| query_id | query | positive_passages | negative_passages | subset |
|---|---|---|---|---|
| stringlengths 32-32 | stringlengths 7-2.91k | listlengths 1-7 | listlengths 10-100 | stringclasses 7 values |
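The records below follow this schema: each row pairs one query with a small list of positive passages, a larger list of negative passages, and a subset label. As a minimal sketch of how such a dump is typically consumed — assuming the table is published as a Hugging Face dataset; the repository path and split below are placeholders, not the actual source of this preview:

```python
# Sketch only: iterate over records with the schema shown above.
# "namespace/dataset-name" and the split name are placeholders for the real source.
from datasets import load_dataset

ds = load_dataset("namespace/dataset-name", split="train")

for record in ds.select(range(2)):  # first two records, for illustration
    print(record["query_id"], "|", record["query"][:60])
    # positive_passages / negative_passages are lists of {"docid", "text", "title"} dicts
    for passage in record["positive_passages"]:
        print("  +", passage["docid"], passage["text"][:80])
    print("  negatives:", len(record["negative_passages"]), "| subset:", record["subset"])
```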
a2e3f2cacf957a3c4c284d7e51d9ad1e | Reappraisal reconsidered : A closer look at the costs of an acclaimed emotion regulation strategy | [
{
"docid": "75b4640071754d331783d26020f9ac7a",
"text": "Traditionally, positive emotions and thoughts, strengths, and the satisfaction of basic psychological needs for belonging, competence, and autonomy have been seen as the cornerstones of psychological health. Without disputing their importance, these foci fail to capture many of the fluctuating, conflicting forces that are readily apparent when people navigate the environment and social world. In this paper, we review literature to offer evidence for the prominence of psychological flexibility in understanding psychological health. Thus far, the importance of psychological flexibility has been obscured by the isolation and disconnection of research conducted on this topic. Psychological flexibility spans a wide range of human abilities to: recognize and adapt to various situational demands; shift mindsets or behavioral repertoires when these strategies compromise personal or social functioning; maintain balance among important life domains; and be aware, open, and committed to behaviors that are congruent with deeply held values. In many forms of psychopathology, these flexibility processes are absent. In hopes of creating a more coherent understanding, we synthesize work in emotion regulation, mindfulness and acceptance, social and personality psychology, and neuropsychology. Basic research findings provide insight into the nature, correlates, and consequences of psychological flexibility and applied research provides details on promising interventions. Throughout, we emphasize dynamic approaches that might capture this fluid construct in the real-world.",
"title": ""
},
{
"docid": "9f5fac3ed88722d5d1be43ff92ba9450",
"text": "We examined the relationships between six emotion-regulation strategies (acceptance, avoidance, problem solving, reappraisal, rumination, and suppression) and symptoms of four psychopathologies (anxiety, depression, eating, and substance-related disorders). We combined 241 effect sizes from 114 studies that examined the relationships between dispositional emotion regulation and psychopathology. We focused on dispositional emotion regulation in order to assess patterns of responding to emotion over time. First, we examined the relationship between each regulatory strategy and psychopathology across the four disorders. We found a large effect size for rumination, medium to large for avoidance, problem solving, and suppression, and small to medium for reappraisal and acceptance. These results are surprising, given the prominence of reappraisal and acceptance in treatment models, such as cognitive-behavioral therapy and acceptance-based treatments, respectively. Second, we examined the relationship between each regulatory strategy and each of the four psychopathology groups. We found that internalizing disorders were more consistently associated with regulatory strategies than externalizing disorders. Lastly, many of our analyses showed that whether the sample came from a clinical or normative population significantly moderated the relationships. This finding underscores the importance of adopting a multi-sample approach to the study of psychopathology.",
"title": ""
},
{
"docid": "93bca110f5551d8e62dc09328de83d4f",
"text": "It is well established that emotion plays a key role in human social and economic decision making. The recent literature on emotion regulation (ER), however, highlights that humans typically make efforts to control emotion experiences. This leaves open the possibility that decision effects previously attributed to acute emotion may be a consequence of acute ER strategies such as cognitive reappraisal and expressive suppression. In Study 1, we manipulated ER of laboratory-induced fear and disgust, and found that the cognitive reappraisal of these negative emotions promotes risky decisions (reduces risk aversion) in the Balloon Analogue Risk Task and is associated with increased performance in the prehunch/hunch period of the Iowa Gambling Task. In Study 2, we found that naturally occurring negative emotions also increase risk aversion in Balloon Analogue Risk Task, but the incidental use of cognitive reappraisal of emotions impedes this effect. We offer evidence that the increased effectiveness of cognitive reappraisal in reducing the experience of emotions underlies its beneficial effects on decision making.",
"title": ""
}
] | [
{
"docid": "b87cf41b31b8d163d6e44c9b1fa68cae",
"text": "This paper gives a security analysis of Microsoft's ASP.NET technology. The main part of the paper is a list of threats which is structured according to an architecture of Web services and attack points. We also give a reverse table of threats against security requirements as well as a summary of security guidelines for IT developers. This paper has been worked out in collaboration with five University teams each of which is focussing on a different security problem area. We use the same architecture for Web services and attack points.",
"title": ""
},
{
"docid": "df5cf5cd42e216ef723a6e2295a92f02",
"text": "This integrative literature review assesses the relationship between hospital nurses' work environment characteristics and patient safety outcomes and recommends directions for future research based on examination of the literature. Using an electronic search of five databases, 18 studies published in English between 1999 and 2016 were identified for review. All but one study used a cross-sectional design, and only four used a conceptual/theoretical framework to guide the research. No definition of work environment was provided in most studies. Differing variables and instruments were used to measure patient outcomes, and findings regarding the effects of work environment on patient outcomes were inconsistent. To clarify the relationship between nurses' work environment characteristics and patient safety outcomes, researchers should consider using a longitudinal study design, using a theoretical foundation, and providing clear operational definitions of concepts. Moreover, given the inconsistent findings of previous studies, they should choose their measurement methodologies with care.",
"title": ""
},
{
"docid": "d043a086f143c713e4c4e74c38e3040c",
"text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.",
"title": ""
},
{
"docid": "6f8cc4d648f223840ca67550f1a3b6dd",
"text": "Information interaction system plays an important role in establishing a real-time and high-efficient traffic management platform in Intelligent Transportation System (ITS) applications. However, the present transmission technology still exists some defects in satisfying with the real-time performance of users data demand in Vehicle-to-Vehicle (V2V) communication. In order to solve this problem, this paper puts forward a novel Node Operating System (NDOS) scheme to realize the real-time data exchange between vehicles with wireless communication chips of mobile devices, and creates a distributed information interaction system for the interoperability between devices from various manufacturers. In addition, optimized data forwarding scheme is discussed for NDOS to achieve better transmission property and channel resource utilization. Experiments have been carried out in Network Simulator 2 (NS2) evaluation environment, and the results suggest that the scheme can receive higher transmission efficiency and validity than existing communication skills.",
"title": ""
},
{
"docid": "6b70f1ab7f836d5a2681c3f998393ed3",
"text": "FOREST FIRES CAUSE MANY ENVIronmental disasters, creating economical and ecological damage as well as endangering people’s lives. Heightened interest in automatic surveillance and early forest-fire detection has taken precedence over traditional human surveillance because the latter’s subjectivity affects detection reliability, which is the main issue for forest-fire detection systems. In current systems, the process is tedious, and human operators must manually validate many false alarms. Our approach—the False Alarm Reduction system—proposes an alternative realtime infrared–visual system that overcomes this problem. The FAR system consists of applying new infrared-image processing techniques and Artificial Neural Networks (ANNs), using additional information from meteorological sensors and from a geographical information database, taking advantage of the information redundancy from visual and infrared cameras through a matching process, and designing a fuzzy expert rule base to develop a decision function. Furthermore, the system provides the human operator with new software tools to verify alarms.",
"title": ""
},
{
"docid": "98ed823294928f0f281c36d5ae0a6071",
"text": "Entity matching is a crucial and difficult task for data integration. An effective solution strategy typically has to combine several techniques and to find suitable settings for critical configuration parameters such as similarity thresholds. Supervised (trainingbased) approaches promise to reduce the manual work for determining (learning) effective strategies for entity matching. However, they critically depend on training data selection which is a difficult problem that has so far mostly been addressed manually by human experts. In this paper we propose a trainingbased framework called STEM for entity matching and present different generic methods for automatically selecting training data to combine and configure several matching techniques. We evaluate the proposed methods for different match tasks and smalland medium-sized training sets.",
"title": ""
},
{
"docid": "7a5e65dde7af8fe05654ea9d5c3b7861",
"text": "The objective of this paper is to provide a comparison among permanent magnet (PM) wind generators of different topologies. Seven configurations are chosen for the comparison, consisting of both radial-flux and axial-flux machines. The comparison is done at seven power levels ranging from 1 to 200 kW. The basis for the comparison is discussed and implemented in detail in the design procedure. The criteria used for comparison are considered to be critical for the efficient deployment of PM wind generators. The design data are optimized and verified by finite-element analysis and commercial generator test results. For a given application, the results provide an indication of the best-suited machine.",
"title": ""
},
{
"docid": "ebf7457391e8f1e728508f9b5af7a19f",
"text": "Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. grammatical production rules) features with all possible values. In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays.",
"title": ""
},
{
"docid": "5516a1459b44b340c930e8a2ed3ca152",
"text": "Laboratory testing is important in the diagnosis and monitoring of liver injury and disease. Current liver tests include plasma markers of injury (e.g. aminotransferases, γ-glutamyl transferase, and alkaline phosphatase), markers of function (e.g. prothrombin time, bilirubin), viral hepatitis serologies, and markers of proliferation (e.g. α-fetoprotein). Among the injury markers, the alanine and aspartate aminotransferases (ALT and AST, respectively) are the most commonly used. However, interpretation of ALT and AST plasma levels can be complicated. Furthermore, both have poor prognostic utility in acute liver injury and liver failure. New biomarkers of liver injury are rapidly being developed, and the US Food and Drug Administration the European Medicines Agency have recently expressed support for use of some of these biomarkers in drug trials. The purpose of this paper is to review the history of liver biomarkers, to summarize mechanisms and interpretation of ALT and AST elevation in plasma in liver injury (particularly acute liver injury), and to discuss emerging liver injury biomarkers that may complement or even replace ALT and AST in the future.",
"title": ""
},
{
"docid": "5d85e552841fe415daa72dff2a5f9706",
"text": "M any security faculty members and practitioners bemoan the lack of good books in the field. Those of us who teach often find ourselves forced to rely on collections of papers to fortify our courses. In the last few years, however, we've started to see the appearance of some high-quality books to support our endeavors. Matt Bishop's book—Com-puter Security: Art and Science—is definitely hefty and packed with lots of information. It's a large book (with more than 1,000 pages), and it covers most any computer security topic that might be of interest. section discusses basic security issues at the definitional level. The Policy section addresses the relationship between policy and security, examining several types of policies in the process. Implementation I covers cryptography and its role in security. Implementation II describes how to apply policy requirements in systems. The Assurance section, which Elisabeth Sullivan wrote, introduces assurance basics and formal methods. The Special Topics section discusses malicious logic, vulnerability analysis , auditing, and intrusion detection. Finally, the Practicum ties all the previously discussed material to real-world examples. A ninth additional section, called End Matter, discusses miscellaneous supporting mathematical topics and concludes with an example. At a publisher's list price of US$74.99, you'll want to know why you should consider buying such an expensive book. Several things set it apart from other, similar, offerings. Most importantly , the book provides numerous examples and, refreshingly, definitions. A vertical bar alongside the examples distinguishes them from other text, so picking them out is easy. The book also includes a bibliography of over 1,000 references. Additionally, each chapter includes a summary, suggestions for further reading, research issues, and practice exercises. The format and layout are good, and the fonts are readable. The book is aimed at several audiences , and the preface describes many roadmaps, one of which discusses dependencies among the various chapters. Instructors can use it at the advanced undergraduate level or for introductory graduate-level computer-security courses. The preface also includes a mapping of suggested topics for undergraduate and graduate courses, presuming a certain amount of math and theoretical computer-science background as prerequisites. Practitioners can use the book as a resource for information on specific topics; the examples in the Practicum are ideally suited for them. So, what's the final verdict? Practitioners will want to consider this book as a reference to add to their bookshelves. Teachers of advanced undergraduate or introductory …",
"title": ""
},
{
"docid": "0e1d93bb8b1b2d2e3453384092f39afc",
"text": "Repetitive or prolonged head flexion posture while using a smartphone is known as one of risk factors for pain symptoms in the neck. To quantitatively assess the amount and range of head flexion of smartphone users, head forward flexion angle was measured from 18 participants when they were conducing three common smartphone tasks (text messaging, web browsing, video watching) while sitting and standing in a laboratory setting. It was found that participants maintained head flexion of 33-45° (50th percentile angle) from vertical when using the smartphone. The head flexion angle was significantly larger (p < 0.05) for text messaging than for the other tasks, and significantly larger while sitting than while standing. Study results suggest that text messaging, which is one of the most frequently used app categories of smartphone, could be a main contributing factor to the occurrence of neck pain of heavy smartphone users. Practitioner Summary: In this laboratory study, the severity of head flexion of smartphone users was quantitatively evaluated when conducting text messaging, web browsing and video watching while sitting and standing. Study results indicate that text messaging while sitting caused the largest head flexion than that of other task conditions.",
"title": ""
},
{
"docid": "93810beca2ba988e29852cd1bc4b8ab6",
"text": "Emotion dysregulation is thought to be critical to the development of negative psychological outcomes. Gross (1998b) conceptualized the timing of regulation strategies as key to this relationship, with response-focused strategies, such as expressive suppression, as less effective and more detrimental compared to antecedent-focused ones, such as cognitive reappraisal. In the current study, we examined the relationship between reappraisal and expressive suppression and measures of psychopathology, particularly for stress-related reactions, in both undergraduate and trauma-exposed community samples of women. Generally, expressive suppression was associated with higher, and reappraisal with lower, self-reported stress-related symptoms. In particular, expressive suppression was associated with PTSD, anxiety, and depression symptoms in the trauma-exposed community sample, with rumination partially mediating this association. Finally, based on factor analysis, expressive suppression and cognitive reappraisal appear to be independent constructs. Overall, expressive suppression, much more so than cognitive reappraisal, may play an important role in the experience of stress-related symptoms. Further, given their independence, there are potentially relevant clinical implications, as interventions that shift one of these emotion regulation strategies may not lead to changes in the other.",
"title": ""
},
{
"docid": "b93455e6b023910bf7711d56d16f62a2",
"text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.",
"title": ""
},
{
"docid": "5b56288bb7b49f18148f28798cfd8129",
"text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.",
"title": ""
},
{
"docid": "c71a5f23d9d8b9093ca1b2ccdb3d396a",
"text": "1 M.Tech. Student 2 Assistant Professor 1,2 Department of Computer Science and Engineering 1,2 Don Bosco Institute of Technology, Affiliated by VTU Abstract— In the recent years Sentiment analysis (SA) has gained momentum by the increase of social networking sites. Sentiment analysis has been an important topic for data mining, social media for classifying reviews and thereby rating the entities such as products, movies etc. This paper represents a comparative study of sentiment classification of lexicon based approach and naive bayes classifier of machine learning in sentiment analysis.",
"title": ""
},
{
"docid": "6fb72f68aa41a71ea51b81806d325561",
"text": "An important aspect related to the development of face-aging algorithms is the evaluation of the ability of such algorithms to produce accurate age-progressed faces. In most studies reported in the literature, the performance of face-aging systems is established based either on the judgment of human observers or by using machine-based evaluation methods. In this paper we perform an experimental evaluation that aims to assess the applicability of human-based against typical machine based performance evaluation methods. The results of our experiments indicate that machines can be more accurate in determining the performance of face-aging algorithms. Our work aims towards the development of a complete evaluation framework for age progression methodologies.",
"title": ""
},
{
"docid": "dad1c5e4aa43b9fc2b3592799f9a3a69",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.068 ⇑ Tel.: +886 7 3814526. E-mail address: leechung@mail.ee.kuas.edu.tw Due to the explosive growth of social-media applications, enhancing event-awareness by social mining has become extremely important. The contents of microblogs preserve valuable information associated with past disastrous events and stories. To learn the experiences from past events for tackling emerging real-world events, in this work we utilize the social-media messages to characterize real-world events through mining their contents and extracting essential features for relatedness analysis. On one hand, we established an online clustering approach on Twitter microblogs for detecting emerging events, and meanwhile we performed event relatedness evaluation using an unsupervised clustering approach. On the other hand, we developed a supervised learning model to create extensible measure metrics for offline evaluation of event relatedness. By means of supervised learning, our developed measure metrics are able to compute relatedness of various historical events, allowing the event impacts on specified domains to be quantitatively measured for event comparison. By combining the strengths of both methods, the experimental results showed that the combined framework in our system is sensible for discovering more unknown knowledge about event impacts and enhancing event awareness. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "77f8f90edd85f1af6de8089808153dd7",
"text": "Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.",
"title": ""
},
{
"docid": "91ef2853e45d9b82f92689e0b01e6d63",
"text": "BACKGROUND\nThis study sought to evaluate the efficacy of nonoperative compression in correcting pectus carinatum in children.\n\n\nMATERIALS AND METHODS\nChildren presenting with pectus carinatum between August 1999 and January 2004 were prospectively enrolled in this study. The management protocol included custom compressive bracing, strengthening exercises, and frequent clinical follow-up.\n\n\nRESULTS\nThere were 30 children seen for evaluation. Their mean age was 13 years (range, 3-16 years) and there were 26 boys and 4 girls. Of the 30 original patients, 6 never returned to obtain the brace, leaving 24 patients in the study. Another 4 subjects were lost to follow-up. For the remaining 20 patients who have either completed treatment or continue in the study, the mean duration of bracing was 16 months, involving an average of 3 follow-up visits and 2 brace adjustments. Five of these patients had little or no improvement due to either too short a follow-up or noncompliance with the bracing. The other 15 patients (75%) had a significant to complete correction. There were no complications encountered during the study period.\n\n\nCONCLUSION\nCompressive orthotic bracing is a safe and effective alternative to both invasive surgical correction and no treatment for pectus carinatum in children. Compliance is critical to the success of this management strategy.",
"title": ""
},
{
"docid": "b093976428f2125a7186d5f4b641292c",
"text": "CONTEXT\nDehydroepiandrosterone (DHEA) and DHEA sulfate (DHEAS) are the major circulating adrenal steroids and substrates for peripheral sex hormone biosynthesis. In Addison's disease, glucocorticoid and mineralocorticoid deficiencies require lifelong replacement, but the associated near-total failure of DHEA synthesis is not typically corrected.\n\n\nOBJECTIVE AND DESIGN\nIn a double-blind trial, we randomized 106 subjects (44 males, 62 females) with Addison's disease to receive either 50 mg daily of micronized DHEA or placebo orally for 12 months to evaluate its longer-term effects on bone mineral density, body composition, and cognitive function together with well-being and fatigue.\n\n\nRESULTS\nCirculating DHEAS and androstenedione rose significantly in both sexes, with testosterone increasing to low normal levels only in females. DHEA reversed ongoing loss of bone mineral density at the femoral neck (P < 0.05) but not at other sites; DHEA enhanced total body (P = 0.02) and truncal (P = 0.017) lean mass significantly with no change in fat mass. At baseline, subscales of psychological well-being in questionnaires (Short Form-36, General Health Questionnaire-30), were significantly worse in Addison's patients vs. control populations (P < 0.001), and one subscale of SF-36 improved significantly (P = 0.004) after DHEA treatment. There was no significant benefit of DHEA treatment on fatigue or cognitive or sexual function. Supraphysiological DHEAS levels were achieved in some older females who experienced mild androgenic side effects.\n\n\nCONCLUSION\nAlthough further long-term studies of DHEA therapy, with dosage adjustment, are desirable, our results support some beneficial effects of prolonged DHEA treatment in Addison's disease.",
"title": ""
}
] | scidocsrr |
8d6345ae1dbe14185089ee6bb06dc57f | Learning from Examples as an Inverse Problem | [
{
"docid": "f51a854a390be7d6980b49aea2e955cf",
"text": "The purpose of this paper is to provide a PAC error analysis for the q-norm soft margin classifier, a support vector machine classification algorithm. It consists of two parts: regularization error and sample error. While many techniques are available for treating the sample error, much less is known for the regularization error and the corresponding approximation error for reproducing kernel Hilbert spaces. We are mainly concerned about the regularization error. It is estimated for general distributions by a K-functional in weighted L spaces. For weakly separable distributions (i.e., the margin may be zero) satisfactory convergence rates are provided by means of separating functions. A projection operator is introduced, which leads to better sample error estimates especially for small complexity kernels. The misclassification error is bounded by the V -risk associated with a general class of loss functions V . The difficulty of bounding the offset is overcome. Polynomial kernels and Gaussian kernels are used to demonstrate the main results. The choice of the regularization parameter plays an important role in our analysis.",
"title": ""
}
] | [
{
"docid": "19607c362f07ebe0238e5940fefdf03f",
"text": "This paper presents an approach for generating photorealistic video sequences of dynamically varying facial expressions in human-agent interactions. To this end, we study human-human interactions to model the relationship and influence of one individual's facial expressions in the reaction of the other. We introduce a two level optimization of generative adversarial models, wherein the first stage generates a dynamically varying sequence of the agent's face sketch conditioned on facial expression features derived from the interacting human partner. This serves as an intermediate representation, which is used to condition a second stage generative model to synthesize high-quality video of the agent face. Our approach uses a novel L1 regularization term computed from layer features of the discriminator, which are integrated with the generator objective in the GAN model. Session constraints are also imposed on video frame generation to ensure appearance consistency between consecutive frames. We demonstrated that our model is effective at generating visually compelling facial expressions. Moreover, we quantitatively showed that agent facial expressions in the generated video clips reflect valid emotional reactions to behavior of the human partner.",
"title": ""
},
{
"docid": "57a23f68303a3694e4e6ba66e36f7015",
"text": "OBJECTIVE\nTwo studies using cross-sectional designs explored four possible mechanisms by which loneliness may have deleterious effects on health: health behaviors, cardiovascular activation, cortisol levels, and sleep.\n\n\nMETHODS\nIn Study 1, we assessed autonomic activity, salivary cortisol levels, sleep quality, and health behaviors in 89 undergraduate students selected based on pretests to be among the top or bottom quintile in feelings of loneliness. In Study 2, we assessed blood pressure, heart rate, salivary cortisol levels, sleep quality, and health behaviors in 25 older adults whose loneliness was assessed at the time of testing at their residence.\n\n\nRESULTS\nTotal peripheral resistance was higher in lonely than nonlonely participants, whereas cardiac contractility, heart rate, and cardiac output were higher in nonlonely than lonely participants. Lonely individuals also reported poorer sleep than nonlonely individuals. Study 2 indicated greater age-related increases in blood pressure and poorer sleep quality in lonely than nonlonely older adults. Mean salivary cortisol levels and health behaviors did not differ between groups in either study.\n\n\nCONCLUSIONS\nResults point to two potentially orthogonal predisease mechanisms that warrant special attention: cardiovascular activation and sleep dysfunction. Health behavior and cortisol regulation, however, may require more sensitive measures and large sample sizes to discern their roles in loneliness and health.",
"title": ""
},
{
"docid": "e4892dfe4da663c4044a78a8892010a8",
"text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.",
"title": ""
},
{
"docid": "0f2caa9b91c2c180cbfbfcc25941f78e",
"text": "BACKGROUND\nSevere mitral annular calcification causing degenerative mitral stenosis (DMS) is increasingly encountered in patients undergoing mitral and aortic valve interventions. However, its clinical profile and natural history and the factors affecting survival remain poorly characterized. The goal of this study was to characterize the factors affecting survival in patients with DMS.\n\n\nMETHODS\nAn institutional echocardiographic database was searched for patients with DMS, defined as severe mitral annular calcification without commissural fusion and a mean transmitral diastolic gradient of ≥2 mm Hg. This resulted in a cohort of 1,004 patients. Survival was analyzed as a function of clinical, pharmacologic, and echocardiographic variables.\n\n\nRESULTS\nThe patient characteristics were as follows: mean age, 73 ± 14 years; 73% women; coronary artery disease in 49%; and diabetes mellitus in 50%. The 1- and 5-year survival rates were 78% and 47%, respectively, and were slightly worse with higher DMS grades (P = .02). Risk factors for higher mortality included greater age (P < .0001), atrial fibrillation (P = .0009), renal insufficiency (P = .004), mitral regurgitation (P < .0001), tricuspid regurgitation (P < .0001), elevated right atrial pressure (P < .0001), concomitant aortic stenosis (P = .02), and low serum albumin level (P < .0001). Adjusted for propensity scores, use of renin-angiotensin system blockers (P = .02) or statins (P = .04) was associated with better survival, and use of digoxin was associated with higher mortality (P = .007).\n\n\nCONCLUSIONS\nPrognosis in patients with DMS is poor, being worse in the aged and those with renal insufficiency, atrial fibrillation, and other concomitant valvular lesions. Renin-angiotensin system blockers and statins may confer a survival benefit, and digoxin use may be associated with higher mortality in these patients.",
"title": ""
},
{
"docid": "073ea28d4922c2d9c1ef7945ce4aa9e2",
"text": "The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories and use multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Threads must synchronize and exchange data, and the overall performance is heavily in influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.",
"title": ""
},
{
"docid": "3913e29aab9b4447edfd4f34a16c38ed",
"text": "This review compares the biological and physiological function of Sigma receptors [σRs] and their potential therapeutic roles. Sigma receptors are widespread in the central nervous system and across multiple peripheral tissues. σRs consist of sigma receptor one (σ1R) and sigma receptor two (σ2R) and are expressed in numerous regions of the brain. The sigma receptor was originally proposed as a subtype of opioid receptors and was suggested to contribute to the delusions and psychoses induced by benzomorphans such as SKF-10047 and pentazocine. Later studies confirmed that σRs are non-opioid receptors (not an µ opioid receptor) and play a more diverse role in intracellular signaling, apoptosis and metabolic regulation. σ1Rs are intracellular receptors acting as chaperone proteins that modulate Ca2+ signaling through the IP3 receptor. They dynamically translocate inside cells, hence are transmembrane proteins. The σ1R receptor, at the mitochondrial-associated endoplasmic reticulum membrane, is responsible for mitochondrial metabolic regulation and promotes mitochondrial energy depletion and apoptosis. Studies have demonstrated that they play a role as a modulator of ion channels (K+ channels; N-methyl-d-aspartate receptors [NMDAR]; inositol 1,3,5 triphosphate receptors) and regulate lipid transport and metabolism, neuritogenesis, cellular differentiation and myelination in the brain. σ1R modulation of Ca2+ release, modulation of cardiac myocyte contractility and may have links to G-proteins. It has been proposed that σ1Rs are intracellular signal transduction amplifiers. This review of the literature examines the mechanism of action of the σRs, their interaction with neurotransmitters, pharmacology, location and adverse effects mediated through them.",
"title": ""
},
{
"docid": "a33f862d0b7dfde7b9f18aa193db9acf",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor awais.shakoor22@gmail.com Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). 
Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration.
Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "e8ff6978cae740152a918284ebe49fe3",
"text": "Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we suggest to perform active learning for cross-lingual sentiment classification, where only a small scale of samples are actively selected and manually annotated to achieve reasonable performance in a short time for the target language. The challenge therein is that there are normally much more labeled samples in the source language than those in the target language. This makes the small amount of labeled samples from the target language flooded in the aboundance of labeled samples from the source language, which largely reduces their impact on cross-lingual sentiment classification. To address this issue, we propose a data quality controlling approach in the source language to select high-quality samples from the source language. Specifically, we propose two kinds of data quality measurements, intraand extra-quality measurements, from the certainty and similarity perspectives. Empirical studies verify the appropriateness of our active learning approach to cross-lingual sentiment classification.",
"title": ""
},
{
"docid": "01be341cfcfe218896c795d769c66e69",
"text": "This letter proposes a multi-user uplink channel estimation scheme for mmWave massive MIMO over frequency selective fading (FSF) channels. Specifically, by exploiting the angle-domain structured sparsity of mmWave FSF channels, a distributed compressive sensing-based channel estimation scheme is proposed. Moreover, by using the grid matching pursuit strategy with adaptive measurement matrix, the proposed algorithm can solve the power leakage problem caused by the continuous angles of arrival or departure. Simulation results verify the good performance of the proposed solution.",
"title": ""
},
{
"docid": "045162dbad88cd4d341eed216779bb9b",
"text": "BACKGROUND\nCrocodile oil and its products are used as ointments for burns and scalds in traditional medicines. A new ointment formulation - crocodile oil burn ointment (COBO) was developed to provide more efficient wound healing activity. The purpose of the study was to evaluate the burn healing efficacy of this new formulation by employing deep second-degree burns in a Wistar rat model. The analgesic and anti-inflammatory activities of COBO were also studied to provide some evidences for its further use.\n\n\nMATERIALS AND METHODS\nThe wound healing potential of this formulation was evaluated by employing a deep second-degree burn rat model and the efficiency was comparatively assessed against a reference ointment - (1% wt/wt) silver sulfadiazine (SSD). After 28 days, the animals were euthanized and the wounds were removed for transversal and longitudinal histological studies. Acetic acid-induced writhing in mice was used to evaluate the analgesic activity and its anti-inflammatory activity was observed in xylene -induced edema in mice.\n\n\nRESULTS\nCOBO enhanced the burn wound healing (20.5±1.3 d) as indicated by significant decrease in wound closure time compared with the burn control (25.0±2.16 d) (P<0.01). Hair follicles played an importance role in the physiological functions of the skin, and their growth in the wound could be revealed for the skin regeneration situation. Histological results showed that the hair follicles were well-distributed in the post-burn skin of COBO treatment group, and the amounts of total, active, primary and secondary hair follicles in post-burn 28-day skin of COBO treatment groups were more than those in burn control and SSD groups. On the other hand, the analgesic and anti-inflammatory activity of COBO were much better than those of control group, while they were very close to those of moist exposed burn ointment (MEBO).\n\n\nCONCLUSIONS\nCOBO accelerated wound closure, reduced inflammation, and had analgesic effects compared with SSD in deep second degree rat burn model. These findings suggest that COBO would be a potential therapy for treating human burns. Abbreviations: COBO, crocodile oil burn ointment; SSD, silver sulfadiazine; MEBO, moist exposed burn ointment; TCM, traditional Chinese medicine; CHM, Chinese herbal medicine; GC-MS, gas chromatography-mass spectrometry.",
"title": ""
},
{
"docid": "162bfca981e89b1b3174a030ad8f64c6",
"text": "This paper addresses the consensus problem of multiagent systems with a time-invariant communication topology consisting of general linear node dynamics. A distributed observer-type consensus protocol based on relative output measurements is proposed. A new framework is introduced to address in a unified way the consensus of multiagent systems and the synchronization of complex networks. Under this framework, the consensus of multiagent systems with a communication topology having a spanning tree can be cast into the stability of a set of matrices of the same low dimension. The notion of consensus region is then introduced and analyzed. It is shown that there exists an observer-type protocol solving the consensus problem and meanwhile yielding an unbounded consensus region if and only if each agent is both stabilizable and detectable. A multistep consensus protocol design procedure is further presented. The consensus with respect to a time-varying state and the robustness of the consensus protocol to external disturbances are finally discussed. The effectiveness of the theoretical results is demonstrated through numerical simulations, with an application to low-Earth-orbit satellite formation flying.",
"title": ""
},
{
"docid": "0f5ad4bd916a0115215adc938d46bf2c",
"text": "We propose a new paradigm to effortlessly get a portable geometric Level Of Details (LOD) for a point cloud inside a Point Cloud Server. The point cloud is divided into groups of points (patch), then each patch is reordered (MidOc ordering) so that reading points following this order provides more and more details on the patch. This LOD have then multiple applications: point cloud size reduction for visualisation (point cloud streaming) or speeding of slow algorithm, fast density peak detection and correction as well as safeguard for methods that may be sensible to density variations. The LOD method also embeds information about the sensed object geometric nature, and thus can be used as a crude multi-scale dimensionality descriptor, enabling fast classification and on-the-fly filtering for basic classes.",
"title": ""
},
{
"docid": "dedef832d8b54cac137277afe9cd27eb",
"text": "The number of strands to minimize loss in a litz-wire transformer winding is determined. With fine stranding, the ac resistance factor decreases, but dc resistance increases because insulation occupies more of the window area. A power law to model insulation thickness is combined with standard analysis of proximity-effect losses.",
"title": ""
},
{
"docid": "228cd0696e0da6f18a22aa72f009f520",
"text": "Modern Convolutional Neural Networks (CNN) are extremely powerful on a range of computer vision tasks. However, their performance may degrade when the data is characterised by large intra-class variability caused by spatial transformations. The Spatial Transformer Network (STN) is currently the method of choice for providing CNNs the ability to remove those transformations and improve performance in an end-to-end learning framework. In this paper, we propose Densely Fused Spatial Transformer Network (DeSTNet), which, to our best knowledge, is the first dense fusion pattern for combining multiple STNs. Specifically, we show how changing the connectivity pattern of multiple STNs from sequential to dense leads to more powerful alignment modules. Extensive experiments on three benchmarks namely, MNIST, GTSRB, and IDocDB show that the proposed technique outperforms related state-of-the-art methods (i.e., STNs and CSTNs) both in terms of accuracy and robustness.",
"title": ""
},
{
"docid": "f3090b5de9f3f1c29f261a2ef86bac61",
"text": "The K-means algorithm is a popular data-clustering algorithm. However, one of its drawbacks is the requirement for the number of clusters, K, to be specified before the algorithm is applied. This paper first reviews existing methods for selecting the number of clusters for the algorithm. Factors that affect this selection are then discussed and a new measure to assist the selection is proposed. The paper concludes with an analysis of the results of using the proposed measure to determine the number of clusters for the K-means algorithm for different data sets.",
"title": ""
},
{
"docid": "e870f2fe9a26b241bdeca882b6186169",
"text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.",
"title": ""
},
{
"docid": "a6f1480f52d142a013bb88a92e47b0d7",
"text": "An isolated switched high step up boost DC-DC converter is discussed in this paper. The main objective of this paper is to step up low voltage to very high voltage. This paper mainly initiates at boosting a 30V DC into 240V DC. The discussed converter benefits from the continuous input current. Usually, step-up DC-DC converters are suitable for input whose voltage level is very low. The circuital design comprises of four main stages. Firstly, an impedance network which is used to boost the low input voltage. Secondly a switching network which is used to boost the input voltage then an isolation transformer which is used to provide higher boosting ability and finally a voltage multiplier rectifier which is used to rectify the secondary voltage of the transformer. No switching deadtime is required, which increases the reliability of the converter. Comparing with the existing step-up topologies indicates that this new design is hybrid, portable, higher power density and the size of the whole system is also reduced. The principles as well as operations were analysed and experimentally worked out, which provides a higher efficiency. KeywordImpedance Network, Switching Network, Isolation Transformer, Voltage Multiplier Rectifier, MicroController, DC-DC Boost Converter __________________________________________________________________________________________________",
"title": ""
},
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
},
{
"docid": "1ad6efaaf4e3201d59c62cd3dbcc01a6",
"text": "•Combine Bayesian change point detection with Gaussian Processes to define a nonstationary time series model. •Central aim is to react to underlying regime changes in an online manner. •Able to integrate out all latent variables and optimize hyperparameters sequentially. •Explore three alternative ways of augmenting GP models to handle nonstationarity (GPTS, ARGPCP and NSGP – see below). •A Bayesian approach (BOCPD) for online change point detection was introduced in [1]. •BOCPD introduces a latent variable representing the run length at time t and adapts predictions via integrating out the run length. •BOCPD has two key ingredients: –Any model which can construct a predictive density for future observations, in particular, p(xt|x(t−τ ):(t−1), θm), i.e., the “underlying predictive model” (UPM). –A hazard function H(r|θh) which encodes our prior belief in a change point occuring after observing a run length r.",
"title": ""
},
{
"docid": "cc05dca89bf1e3f53cf7995e547ac238",
"text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.",
"title": ""
}
] | scidocsrr |
50e449de1faa3af65b198a0fb6353cdd | Distinct balance of excitation and inhibition in an interareal feedforward and feedback circuit of mouse visual cortex. | [
{
"docid": "1f364472fcf7da9bfc18d9bb8a521693",
"text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.",
"title": ""
}
] | [
{
"docid": "dce51c1fed063c9d9776fce998209d25",
"text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundred thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed up mechanism for SVMs, especially when used with sparse feature maps as appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.",
"title": ""
},
{
"docid": "85da43096d4ef2dcb3f8f9ae9ea2db35",
"text": "We present an approach that combines automatic features learned by convolutional neural networks (CNN) and handcrafted features computed by the bag-of-visual-words (BOVW) model in order to achieve state-of-the-art results in facial expression recognition. To obtain automatic features, we experiment with multiple CNN architectures, pretrained models and training procedures, e.g. Dense-SparseDense. After fusing the two types of features, we employ a local learning framework to predict the class label for each test image. The local learning framework is based on three steps. First, a k-nearest neighbors model is applied for selecting the nearest training samples for an input test image. Second, a one-versus-all Support Vector Machines (SVM) classifier is trained on the selected training samples. Finally, the SVM classifier is used for predicting the class label only for the test image it was trained for. Although we used local learning in combination with handcrafted features in our previous work, to the best of our knowledge, local learning has never been employed in combination with deep features. The experiments on the 2013 Facial Expression Recognition (FER) Challenge data set and the FER+ data set demonstrate that our approach achieves state-ofthe-art results. With a top accuracy of 75.42% on the FER 2013 data set and 87.76% on the FER+ data set, we surpass all competition by more than 2% on both data sets.",
"title": ""
},
{
"docid": "f09f5d7e0f75d4b0fdbd8c40860c4473",
"text": "Purpose – The purpose of this paper is to examine the diffusion of a popular Korean music video on the video-sharing web site YouTube. It applies a webometric approach in the diffusion of innovations framework to study three elements of diffusion in a Web 2.0 environment: users, user-to-user relationship and user-generated comment. Design/methodology/approach – The webometric approach combines profile analyses, social network analyses, semantic and sentiment analyses. Findings – The results show that male users in the US played a dominant role in the early-stage diffusion. The dominant users represented the innovators and early adopters in the evaluation stage of the diffusion, and they engaged in continuous discussions about the cultural origin of the video and expressed criticisms. Overall, the discussion between users varied according to their gender, age, and cultural background. Specifically, male users were more interactive than female users, and users in countries culturally similar to Korea were more likely to express favourable attitudes toward the video. Originality/value – The study provides a webometric approach to examine the Web 2.0-based social system in the early-stage global diffusion of cultural offerings. This approach connects the diffusion of innovations framework to the new context of Web 2.0-based diffusion.",
"title": ""
},
{
"docid": "c57a689627f1af0bf872e4d0c5953a28",
"text": "Image diffusion plays a fundamental role for the task of image denoising. The recently proposed trainable nonlinear reaction diffusion (TNRD) model defines a simple but very effective framework for image denoising. However, as the TNRD model is a local model, whose diffusion behavior is purely controlled by information of local patches, it is prone to create artifacts in the homogenous regions and over-smooth highly textured regions, especially in the case of strong noise levels. Meanwhile, it is widely known that the non-local self-similarity (NSS) prior stands as an effective image prior for image denoising, which has been widely exploited in many non-local methods. In this work, we are highly motivated to embed the NSS prior into the TNRD model to tackle its weaknesses. In order to preserve the expected property that end-to-end training remains available, we exploit the NSS prior by defining a set of non-local filters, and derive our proposed trainable non-local reaction diffusion (TNLRD) model for image denoising. Together with the local filters and influence functions, the non-local filters are learned by employing loss-specific training. The experimental results show that the trained TNLRD model produces visually plausible recovered images with more textures and less artifacts, compared to its local versions. Moreover, the trained TNLRD model can achieve strongly competitive performance to recent state-of-the-art image denoising methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).",
"title": ""
},
{
"docid": "62a8548527371acb657d9552ab41d699",
"text": "This paper proposes a novel dynamic gait of locomotion for hexapedal robots which enables them to crawl forward, backward, and rotate using a single actuator. The gait exploits the compliance difference between the two sides of the tripods, to generate clockwise or counter clockwise rotation by controlling the acceleration of the robot. The direction of turning depends on the configuration of the legs -tripod left of right- and the direction of the acceleration. Alternating acceleration in successive steps allows for continuous rotation in the desired direction. An analysis of the locomotion is presented as a function of the mechanical properties of the robot and the contact with the surface. A numerical simulation was performed for various conditions of locomotion. The results of the simulation and analysis were compared and found to be in excellent match.",
"title": ""
},
{
"docid": "7531be3af1285a4c1c0b752d1ee45f52",
"text": "Given an undirected graph with weight for each vertex, the maximum weight clique problem is to find the clique of the maximum weight. Östergård proposed a fast exact algorithm for solving this problem. We show his algorithm is not efficient for very dense graphs. We propose an exact algorithm for the problem, which is faster than Östergård’s algorithm in case the graph is dense. We show the efficiency of our algorithm with some experimental results.",
"title": ""
},
{
"docid": "9584d194e05359ef5123c6b3d71e1c75",
"text": "A bloom filter is a randomized data structure for performing approximate membership queries. It is being increasingly used in networking applications ranging from security to routing in peer to peer networks. In order to meet a given false positive rate, the amount of memory required by a bloom filter is a function of the number of elements in the set. We consider the problem of minimizing the memory requirements in cases where the number of elements in the set is not known in advance but the distribution or moment information of the number of elements is known. We show how to exploit such information to minimize the expected amount of memory required for the filter. We also show how this approach can significantly reduce memory requirement when bloom filters are constructed for multiple sets in parallel. We show analytically as well as experiments on synthetic and trace data that our approach leads to one to three orders of magnitude reduction in memory compared to a standard bloom filter.",
"title": ""
},
{
"docid": "3dcf6c5e59d4472c0b0e25c96b992f3e",
"text": "This paper presents the design of Ultra Wideband (UWB) microstrip antenna consisting of a circular monopole patch antenna with 3 block stepped (wing). The antenna design is an improvement from previous research and it is simulated using CST Microwave Studio software. This antenna was designed on Rogers 5880 printed circuit board (PCB) with overall size of 26 × 40 × 0.787 mm3 and dielectric substrate, εr = 2.2. The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, radiation pattern, and verified through actual measurement of the fabricated antenna. 10 dB return loss bandwidth from 3.37 GHz to 10.44 GHz based on 50 ohm characteristic impedance for the transmission line model was obtained.",
"title": ""
},
{
"docid": "501d6ec6163bc8b93fd728412a3e97f3",
"text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.",
"title": ""
},
{
"docid": "bea270701da3f8d47b19dc7976000562",
"text": "Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in the automatic surveillance of electrical power infrastructure. For an automatic vision based power line inspection system, detecting power lines from cluttered background an important and challenging task. In this paper, we propose a knowledge-based power line detection method for a vision based UAV surveillance and inspection system. A PCNN filter is developed to remove background noise from the images prior to the Hough transform being employed to detect straight lines. Finally knowledge based line clustering is applied to refine the detection results. The experiment on real image data captured from a UAV platform demonstrates that the proposed approach is effective.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "d580021d1e7cfe44e58dbace3d5c7bee",
"text": "We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework to understand social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of his framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become integral part of our societies.",
"title": ""
},
{
"docid": "6c00347ffa60b09692bbae45a0c01fc1",
"text": "OBJECTIVES:Eosinophilic gastritis (EG), defined by histological criteria as marked eosinophilia in the stomach, is rare, and large studies in children are lacking. We sought to describe the clinical, endoscopic, and histopathological features of EG, assess for any concurrent eosinophilia at other sites of the gastrointestinal (GI) tract, and evaluate response to dietary and pharmacological therapies.METHODS:Pathology files at our medical center were searched for histological eosinophilic gastritis (HEG) with ≥70 gastric eosinophils per high-power field in children from 2005 to 2011. Pathology slides were evaluated for concurrent eosinophilia in the esophagus, duodenum, and colon. Medical records were reviewed for demographic characteristics, symptoms, endoscopic findings, comorbidities, and response to therapy.RESULTS:Thirty children with severe gastric eosinophilia were identified, median age 7.5 years, 14 of whom had both eosinophilia limited to the stomach and clinical symptoms, fulfilling the clinicopathological definition of EG. Symptoms and endoscopic features were highly variable. History of atopy and food allergies was common. A total of 22% had protein-losing enteropathy (PLE). Gastric eosinophilia was limited to the fundus in two patients. Many patients had associated eosinophilic esophagitis (EoE, 43%) and 21% had eosinophilic enteritis. Response to dietary restriction therapy was high (82% clinical response and 78% histological response). Six out of sixteen patients had persistent EoE despite resolution of their gastric eosinophilia; two children with persistent HEG post therapy developed de novo concurrent EoE.CONCLUSIONS:HEG in children can be present in the antrum and/or fundus. Symptoms and endoscopic findings vary, highlighting the importance of biopsies for diagnosis. HEG is associated with PLE, and with eosinophilia elsewhere in the GI tract including the esophagus. The disease is highly responsive to dietary restriction therapies in children, implicating an allergic etiology. Associated EoE is more resistant to therapy.",
"title": ""
},
{
"docid": "f018db7f20245180d74e4eb07b99e8d3",
"text": "Particle filters can become quite inefficient when being applied to a high-dimensional state space since a prohibitively large number of samples may be required to approximate the underlying density functions with desired accuracy. In this paper, by proposing an adaptive Rao-Blackwellized particle filter for tracking in surveillance, we show how to exploit the analytical relationship among state variables to improve the efficiency and accuracy of a regular particle filter. Essentially, the distributions of the linear variables are updated analytically using a Kalman filter which is associated with each particle in a particle filtering framework. Experiments and detailed performance analysis using both simulated data and real video sequences reveal that the proposed method results in more accurate tracking than a regular particle filter",
"title": ""
},
{
"docid": "0627ea85ea93b56aef5ef378026bc2fc",
"text": "This paper presents a resonant inductive coupling wireless power transfer (RIC-WPT) system with a class-DE and class-E rectifier along with its analytical design procedure. By using the class-DE inverter as a transmitter and the class-E rectifier as a receiver, the designed WPT system can achieve a high power-conversion efficiency because of the class-E ZVS/ZDS conditions satisfied in both the inverter and the rectifier. In the simulation results, the system achieved 79.0 % overall efficiency at 5 W (50 Ω) output power, coupling coefficient 0.072, and 1 MHz operating frequency. Additionally, the simulation results showed good agreement with the design specifications, which indicates the validity of the design procedure.",
"title": ""
},
{
"docid": "da698cfca4e5bbc80fbbab5e8f30e22c",
"text": "This paper base on the application of the Internet of things in the logistics industry as the breakthrough point, to investigate the identification technology, network structure, middleware technology support and so on, which is used in the Internet of things, also to analyze the bottleneck of technology that the Internet of things could meet. At last, summarize the Internet of things’ application in the logistics industry with the intelligent port architecture.",
"title": ""
},
{
"docid": "bbea93884f1f0189be1061939783a1c0",
"text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6 % and 45 % nulliparous adolescents, the prevalence of non‐neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.",
"title": ""
},
{
"docid": "b1c62a59a8ce3dd57ab2c00f7657cfef",
"text": "We developed a new method for estimation of vigilance level by using both EEG and EMG signals recorded during transition from wakefulness to sleep. Previous studies used only EEG signals for estimating the vigilance levels. In this study, it was aimed to estimate vigilance level by using both EEG and EMG signals for increasing the accuracy of the estimation rate. In our work, EEG and EMG signals were obtained from 30 subjects. In data preparation stage, EEG signals were separated to its subbands using wavelet transform for efficient discrimination, and chin EMG was used to verify and eliminate the movement artifacts. The changes in EEG and EMG were diagnosed while transition from wakefulness to sleep by using developed artificial neural network (ANN). Training and testing data sets consist of the subbanded components of EEG and power density of EMG signals were applied to the ANN for training and testing the system which gives three situations for the vigilance level of the subject: awake, drowsy, and sleep. The accuracy of estimation was about 98–99% while the accuracy of the previous study, which uses only EEG, was 95–96%.",
"title": ""
},
{
"docid": "497e7a0ed663b2c125650e05f81feae3",
"text": "In this paper we present a novel computer vision library called UAVision that provides support for different digital cameras technologies, from image acquisition to camera calibration, and all the necessary software for implementing an artificial vision system for the detection of color-coded objects. The algorithms behind the object detection focus on maintaining a low processing time, thus the library is suited for real-world real-time applications. The library also contains a TCP Communications Module, with broad interest in robotic applications where the robots are performing remotely from a basestation or from an user and there is the need to access the images acquired by the robot, both for processing or debug purposes. Practical results from the implementation of the same software pipeline using different cameras as part of different types of vision systems are presented. The vision system software pipeline that we present is designed to cope with application dependent time constraints. The experimental results show that using the UAVision library it is possible to use digital cameras at frame rates up to 50 frames per second when working with images of size up to 1 megapixel. Moreover, we present experimental results to show the effect of the frame rate in the delay between the perception of the world and the action of an autonomous robot, as well as the use of raw data from the camera sensor and the implications of this in terms of the referred delay.",
"title": ""
}
] | scidocsrr |
5b57fc4f9326af53596dbb0c6e09bc5e | Binary Shapelet Transform for Multiclass Time Series Classification | [
{
"docid": "88be12fdd7ec90a7af7337f3d29b2130",
"text": "Classification of time series has been attracting great interest over the past decade. While dozens of techniques have been introduced, recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems, especially for large-scale datasets. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a high time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data and to make the classification result more explainable, which global characteristics of the nearest neighbor cannot provide. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. We can use the distance to the shapelet, rather than the distance to the nearest neighbor to classify objects. As we shall show with extensive empirical evaluations in diverse domains, classification algorithms based on the time series shapelet primitives can be interpretable, more accurate, and significantly faster than state-of-the-art classifiers.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "8609f49cc78acc1ba25e83c8e68040a6",
"text": "Time series shapelets are small, local patterns in a time series that are highly predictive of a class and are thus very useful features for building classifiers and for certain visualization and summarization tasks. While shapelets were introduced only recently, they have already seen significant adoption and extension in the community. Despite their immense potential as a data mining primitive, there are two important limitations of shapelets. First, their expressiveness is limited to simple binary presence/absence questions. Second, even though shapelets are computed offline, the time taken to compute them is significant. In this work, we address the latter problem by introducing a novel algorithm that finds shapelets in less time than current methods by an order of magnitude. Our algorithm is based on intelligent caching and reuse of computations, and the admissible pruning of the search space. Because our algorithm is so fast, it creates an opportunity to consider more expressive shapelet queries. In particular, we show for the first time an augmented shapelet representation that distinguishes the data based on conjunctions or disjunctions of shapelets. We call our novel representation Logical-Shapelets. We demonstrate the efficiency of our approach on the classic benchmark datasets used for these problems, and show several case studies where logical shapelets significantly outperform the original shapelet representation and other time series classification techniques. We demonstrate the utility of our ideas in domains as diverse as gesture recognition, robotics, and biometrics.",
"title": ""
}
] | [
{
"docid": "df4fbaf83a761235c5d77654973b5eb1",
"text": "We add to the discussion of how to assess the creativity of programs which generate artefacts such as poems, theorems, paintings, melodies, etc. To do so, we first review some existing frameworks for assessing artefact generation programs. Then, drawing on our experience of building both a mathematical discovery system and an automated painter, we argue that it is not appropriate to base the assessment of a system on its output alone, and that the way it produces artefacts also needs to be taken into account. We suggest a simple framework within which the behaviour of a program can be categorised and described which may add to the perception of creativity in the system.",
"title": ""
},
{
"docid": "cb011c7e0d4d5f6d05e28c07ff02e18b",
"text": "The legendary wealth in gold of ancient Egypt seems to correspond with an unexpected high number of gold production sites in the Eastern Desert of Egypt and Nubia. This contribution introduces briefly the general geology of these vast regions and discusses the geology of the different varieties of the primary gold occurrences (always related to auriferous quartz mineralization in veins or shear zones) as well as the variable physico-chemical genesis of the gold concentrations. The development of gold mining over time, from Predynastic (ca. 3000 BC) until the end of Arab gold production times (about 1350 AD), including the spectacular Pharaonic periods is outlined, with examples of its remaining artefacts, settlements and mining sites in remote regions of the Eastern Desert of Egypt and Nubia. Finally, some estimates on the scale of gold production are presented. 2002 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "af572a43542fde321e18675213f635ae",
"text": "The representation of 3D pose plays a critical role for 3D action and gesture recognition. Rather than representing a 3D pose directly by its joint locations, in this paper, we propose a Deformable Pose Traversal Convolution Network that applies one-dimensional convolution to traverse the 3D pose for its representation. Instead of fixing the receptive field when performing traversal convolution, it optimizes the convolution kernel for each joint, by considering contextual joints with various weights. This deformable convolution better utilizes the contextual joints for action and gesture recognition and is more robust to noisy joints. Moreover, by feeding the learned pose feature to a LSTM, we perform end-to-end training that jointly optimizes 3D pose representation and temporal sequence recognition. Experiments on three benchmark datasets validate the competitive performance of our proposed method, as well as its efficiency and robustness to handle noisy joints of pose.",
"title": ""
},
{
"docid": "c8d33f21915a6f1403f046ffa17b6e2e",
"text": "Synthetic aperture radar (SAR) image segmentation is a difficult problem due to the presence of strong multiplicative noise. To attain multi-region segmentation for SAR images, this paper presents a parametric segmentation method based on the multi-texture model with level sets. Segmentation is achieved by solving level set functions obtained from minimizing the proposed energy functional. To fully utilize image information, edge feature and region information are both included in the energy functional. For the need of level set evolution, the ratio of exponentially weighted averages operator is modified to obtain edge feature. Region information is obtained by the improved edgeworth series expansion, which can adaptively model a SAR image distribution with respect to various kinds of regions. The performance of the proposed method is verified by three high resolution SAR images. The experimental results demonstrate that SAR images can be segmented into multiple regions accurately without any speckle pre-processing steps by the proposed method.",
"title": ""
},
{
"docid": "8fa6defe08908c6ee6527d2e3a322a12",
"text": "A new wide-band high-efficiency coplanar waveguide-fed printed loop antenna is presented for wireless communication systems in this paper. By adjusting geometrical parameters, the proposed antenna can easily achieve a wide bandwidth. To optimize the antenna performances, a parametric study was conducted with the aid of a commercial software, and based on the optimized geometry, a prototype was designed, fabricated, and tested. The simulated and measured results confirmed that the proposed antenna can operate at (1.68-2.68 GHz) band and at (1.46-2.6 GHz) band with bandwidth of 1 and 1.14 GHz, respectively. Moreover, the antenna has a nearly omnidirectional radiation pattern with a reasonable gain and high efficiency. Due to the above characteristics, the proposed antenna is very suitable for applications in PCS and IMT2000 systems.",
"title": ""
},
{
"docid": "b1313b777c940445eb540b1e12fa559e",
"text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.",
"title": ""
},
{
"docid": "7f5ff39232cd491e648d40b070e0709c",
"text": "Synthesizing terrain or adding detail to terrains manually is a long and tedious process. With procedural synthesis methods this process is faster but more difficult to control. This paper presents a new technique of terrain synthesis that uses an existing terrain to synthesize new terrain. To do this we use multi-resolution analysis to extract the high-resolution details from existing models and apply them to increase the resolution of terrain. Our synthesized terrains are more heterogeneous than procedural results, are superior to terrains created by texture transfer, and retain the large-scale characteristics of the original terrain.",
"title": ""
},
{
"docid": "23ffdf5e7797e7f01c6d57f1e5546026",
"text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.",
"title": ""
},
{
"docid": "0eb28373a693593d6ca7c1bef34b3bde",
"text": "Software development life cycle or SDLC for short is a methodology for designing, building, and maintaining information and industrial systems. So far, there exist many SDLC models, one of which is the Waterfall model which comprises five phases to be completed sequentially in order to develop a software solution. However, SDLC of software systems has always encountered problems and limitations that resulted in significant budget overruns, late or suspended deliveries, and dissatisfied clients. The major reason for these deficiencies is that project directors are not wisely assigning the required number of workers and resources on the various activities of the SDLC. Consequently, some SDLC phases with insufficient resources may be delayed; while, others with excess resources may be idled, leading to a bottleneck between the arrival and delivery of projects and to a failure in delivering an operational product on time and within budget. This paper proposes a simulation model for the Waterfall development process using the Simphony.NET simulation tool whose role is to assist project managers in determining how to achieve the maximum productivity with the minimum number of expenses, workers, and hours. It helps maximizing the utilization of development processes by keeping all employees and resources busy all the time to keep pace with the arrival of projects and to decrease waste and idle time. As future work, other SDLC models such as spiral and incremental are to be simulated, giving project executives the choice to use a diversity of software development methodologies.",
"title": ""
},
{
"docid": "d06dc916942498014f9d00498c1d1d1f",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "4ce8934f295235acc2bbf03c7530842b",
"text": "— Speech recognition has found its application on various aspects of our daily lives from automatic phone answering service to dictating text and issuing voice commands to computers. In this paper, we present the historical background and technological advances in speech recognition technology over the past few decades. More importantly, we present the steps involved in the design of a speaker-independent speech recognition system. We focus mainly on the pre-processing stage that extracts salient features of a speech signal and a technique called Dynamic Time Warping commonly used to compare the feature vectors of speech signals. These techniques are applied for recognition of isolated as well as connected words spoken. We conduct experiments on MATLAB to verify these techniques. Finally, we design a simple 'Voice-to-Text' converter application using MATLAB.",
"title": ""
},
{
"docid": "8ca0edf4c51b0156c279fcbcb1941d2b",
"text": "The good fossil record of trilobite exoskeletal anatomy and ontogeny, coupled with information on their nonbiomineralized tissues, permits analysis of how the trilobite body was organized and developed, and the various evolutionary modifications of such patterning within the group. In several respects trilobite development and form appears comparable with that which may have characterized the ancestor of most or all euarthropods, giving studies of trilobite body organization special relevance in the light of recent advances in the understanding of arthropod evolution and development. The Cambrian diversification of trilobites displayed modifications in the patterning of the trunk region comparable with those seen among the closest relatives of Trilobita. In contrast, the Ordovician diversification of trilobites, although contributing greatly to the overall diversity within the clade, did so within a narrower range of trunk conditions. Trilobite evolution is consistent with an increased premium on effective enrollment and protective strategies, and with an evolutionary trade-off between the flexibility to vary the number of trunk segments and the ability to regionalize portions of the trunk. 401 A nn u. R ev . E ar th P la ne t. Sc i. 20 07 .3 5: 40 143 4. D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by U N IV E R SI T Y O F C A L IF O R N IA R IV E R SI D E L IB R A R Y o n 05 /0 2/ 07 . F or p er so na l u se o nl y. ANRV309-EA35-14 ARI 20 March 2007 15:54 Cephalon: the anteriormost or head division of the trilobite body composed of a set of conjoined segments whose identity is expressed axially Thorax: the central portion of the trilobite body containing freely articulating trunk segments Pygidium: the posterior tergite of the trilobite exoskeleton containing conjoined segments INTRODUCTION The rich record of the diversity and development of the trilobite exoskeleton (along with information on the geological occurrence, nonbiomineralized tissues, and associated trace fossils of trilobites) provides the best history of any Paleozoic arthropod group. The retention of features that may have characterized the most recent common ancestor of all living arthropods, which have been lost or obscured in most living forms, provides insights into the nature of the evolutionary radiation of the most diverse metazoan phylum alive today. Studies of phylogenetic stem-group taxa, of which Trilobita provide a prominent example, have special significance in the light of renewed interest in arthropod evolution prompted by comparative developmental genetics. Although we cannot hope to dissect the molecular controls operative within trilobites, the evolutionary developmental biology (evo-devo) approach permits a fresh perspective from which to examine the contributions that paleontology can make to evolutionary biology, which, in the context of the overall evolutionary history of Trilobita, is the subject of this review. TRILOBITES: BODY PLAN AND ONTOGENY Trilobites were a group of marine arthropods that appeared in the fossil record during the early Cambrian approximately 520 Ma and have not been reported from rocks younger than the close of the Permian, approximately 250 Ma. Roughly 15,000 species have been described to date, and although analysis of the occurrence of trilobite genera suggests that the known record is quite complete (Foote & Sepkoski 1999), many new species and genera continue to be established each year. 
The known diversity of trilobites results from their strongly biomineralized exoskeletons, made of two layers of low magnesium calcite, which was markedly more durable than the sclerites of most other arthropods. Because the exoskeleton was rich in morphological characters and was the only body structure preserved in the vast majority of specimens, skeletal form has figured prominently in the biological interpretation of trilobites.",
"title": ""
},
{
"docid": "6e16d3e2fba39a5bf1d0fe234310405f",
"text": "In cloud gaming the game is rendered on a distant cloud server and the resulting video stream is sent back to the user who controls the game via a thin client. The high resource usage of cloud gaming servers is a challenge. Expensive hardware including GPUs have to be efficiently shared among multiple simultaneous users. The cloud servers use virtualization techniques to isolate users and share resources among dedicated servers. The traditional virtualization techniques can however inflict notable performance overhead limiting the user count for a single server. Operating-system-level virtualization instances known as containers are an emerging trend in cloud computing. Containers don't need to virtualize the entire operating system still providing most of the benefits of virtualization. In this paper, we evaluate the container-based alternative to traditional virtualization in cloud gaming systems through extensive experiments. We also discuss the differences needed in system implementation using the container approach and identify the existing limitations.",
"title": ""
},
{
"docid": "9bbf2a9f5afeaaa0f6ca12e86aef8e88",
"text": "Phishing is a model problem for illustrating usability concerns of privacy and security because both system designers and attackers battle using user interfaces to guide (or misguide) users.We propose a new scheme, Dynamic Security Skins, that allows a remote web server to prove its identity in a way that is easy for a human user to verify and hard for an attacker to spoof. We describe the design of an extension to the Mozilla Firefox browser that implements this scheme.We present two novel interaction techniques to prevent spoofing. First, our browser extension provides a trusted window in the browser dedicated to username and password entry. We use a photographic image to create a trusted path between the user and this window to prevent spoofing of the window and of the text entry fields.Second, our scheme allows the remote server to generate a unique abstract image for each user and each transaction. This image creates a \"skin\" that automatically customizes the browser window or the user interface elements in the content of a remote web page. Our extension allows the user's browser to independently compute the image that it expects to receive from the server. To authenticate content from the server, the user can visually verify that the images match.We contrast our work with existing anti-phishing proposals. In contrast to other proposals, our scheme places a very low burden on the user in terms of effort, memory and time. To authenticate himself, the user has to recognize only one image and remember one low entropy password, no matter how many servers he wishes to interact with. To authenticate content from an authenticated server, the user only needs to perform one visual matching operation to compare two images. Furthermore, it places a high burden of effort on an attacker to spoof customized security indicators.",
"title": ""
},
{
"docid": "729b29b5ab44102541f3ebf8d24efec3",
"text": "In the cognitive neuroscience literature on the distinction between categorical and coordinate spatial relations, it has often been observed that categorical spatial relations are referred to linguistically by words like English prepositions, many of which specify binary oppositions-e.g., above/below, left/right, on/off, in/out. However, the actual semantic content of English prepositions, and of comparable word classes in other languages, has not been carefully considered. This paper has three aims. The first and most important aim is to inform cognitive neuroscientists interested in spatial representation about relevant research on the kinds of categorical spatial relations that are encoded in the 6000+ languages of the world. Emphasis is placed on cross-linguistic similarities and differences involving deictic relations, topological relations, and projective relations, the last of which are organized around three distinct frames of reference--intrinsic, relative, and absolute. The second aim is to review what is currently known about the neuroanatomical correlates of linguistically encoded categorical spatial relations, with special focus on the left supramarginal and angular gyri, and to suggest ways in which cross-linguistic data can help guide future research in this area of inquiry. The third aim is to explore the interface between language and other mental systems, specifically by summarizing studies which suggest that although linguistic and perceptual/cognitive representations of space are at least partially distinct, language nevertheless has the power to bring about not only modifications of perceptual sensitivities but also adjustments of cognitive styles.",
"title": ""
},
{
"docid": "6eda76a015e8cb9122ed89b491474248",
"text": "Beauty treatment for skin requires a high-intensity focused ultrasound (HIFU) transducer to generate coagulative necrosis in a small focal volume (e.g., 1 mm³) placed at a shallow depth (3-4.5 mm from the skin surface). For this, it is desirable to make the F-number as small as possible under the largest possible aperture in order to generate ultrasound energy high enough to induce tissue coagulation in such a small focal volume. However, satisfying both conditions at the same time is demanding. To meet the requirements, this paper, therefore, proposes a double-focusing technique, in which the aperture of an ultrasound transducer is spherically shaped for initial focusing and an acoustic lens is used to finally focus ultrasound on a target depth of treatment; it is possible to achieve the F-number of unity or less while keeping the aperture of a transducer as large as possible. In accordance with the proposed method, we designed and fabricated a 7-MHz double-focused ultrasound transducer. The experimental results demonstrated that the fabricated double-focused transducer had a focal length of 10.2 mm reduced from an initial focal length of 15.2 mm and, thus, the F-number changed from 1.52 to 1.02. Based on the results, we concluded that the proposed double-focusing method is suitable to decrease F-number while maintaining a large aperture size.",
"title": ""
},
{
"docid": "d1c69dac07439ade32a962134753ab08",
"text": "The change history of a software project contains a rich collection of code changes that record previous development experience. Changes that fix bugs are especially interesting, since they record both the old buggy code and the new fixed code. This paper presents a bug finding algorithm using bug fix memories: a project-specific bug and fix knowledge base developed by analyzing the history of bug fixes. A bug finding tool, BugMem, implements the algorithm. The approach is different from bug finding tools based on theorem proving or static model checking such as Bandera, ESC/Java, FindBugs, JLint, and PMD. Since these tools use pre-defined common bug patterns to find bugs, they do not aim to identify project-specific bugs. Bug fix memories use a learning process, so the bug patterns are project-specific, and project-specific bugs can be detected. The algorithm and tool are assessed by evaluating if real bugs and fixes in project histories can be found in the bug fix memories. Analysis of five open source projects shows that, for these projects, 19.3%-40.3% of bugs appear repeatedly in the memories, and 7.9%-15.5% of bug and fix pairs are found in memories. The results demonstrate that project-specific bug fix patterns occur frequently enough to be useful as a bug detection technique. Furthermore, for the bug and fix pairs, it is possible to both detect the bug and provide a strong suggestion for the fix. However, there is also a high false positive rate, with 20.8%-32.5% of non-bug containing changes also having patterns found in the memories. A comparison of BugMem with a bug finding tool, PMD, shows that the bug sets identified by both tools are mostly exclusive, indicating that BugMem complements other bug finding tools.",
"title": ""
},
{
"docid": "df5aaa0492fc07b76eb7f8da97ebf08e",
"text": "The aim of the present case report is to describe the orthodontic-surgical treatment of a 17-year-and-9-month-old female patient with a Class III malocclusion, poor facial esthetics, and mandibular and chin protrusion. She had significant anteroposterior and transverse discrepancies, a concave profile, and strained lip closure. Intraorally, she had a negative overjet of 5 mm and an overbite of 5 mm. The treatment objectives were to correct the malocclusion, and facial esthetic and also return the correct function. The surgical procedures included a Le Fort I osteotomy for expansion, advancement, impaction, and rotation of the maxilla to correct the occlusal plane inclination. There was 2 mm of impaction of the anterior portion of the maxilla and 5 mm of extrusion in the posterior region. A bilateral sagittal split osteotomy was performed in order to allow counterclockwise rotation of the mandible and anterior projection of the chin, accompanying the maxillary occlusal plane. Rigid internal fixation was used without any intermaxillary fixation. It was concluded that these procedures were very effective in producing a pleasing facial esthetic result, showing stability 7 years posttreatment.",
"title": ""
},
{
"docid": "6ae5f96cd14df30e7ac5cc6b654823df",
"text": "A succession of doctrines for enhancing cybersecurity has been advocated in the past, including prevention, risk management, and deterrence through accountability. None has proved effective. Proposals that are now being made view cybersecurity as a public good and adopt mechanisms inspired by those used for public health. This essay discusses the failings of previous doctrines and surveys the landscape of cybersecurity through the lens that a new doctrine, public cybersecurity, provides.",
"title": ""
},
{
"docid": "867b4cb932ad3ec3ec69cdc831d81cc8",
"text": "This paper reviews the some of significant works on infant cry signal analysis proposed in the past two decades and reviews the recent progress in this field. The cry of baby cannot be predicted accurately where it is very hard to identify for what it cries for. Experienced parents and specialists in the area of child care such as pediatrician and pediatric nurse can distinguish different sort of cries by just making use their individual perception on auditory sense. This is totally subjective evaluation and not suitable for clinical use. Non-invasive method has been widely used in infant cry signal analysis and has shown very promising results. Various feature extraction and classification algorithms used in infant cry analysis are briefly described. This review gives an insight on the current state of the art works in infant cry signal analysis and concludes with thoughts about the future directions for better representation and interpretation of infant cry signals.",
"title": ""
}
] | scidocsrr |
4c3020ee8f4bcf2fbafb71a0f0a880be | Principled Uncertainty Estimation for Deep Neural Networks | [
{
"docid": "c5efe5fe7c945e48f272496e7c92bb9c",
"text": "Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.",
"title": ""
},
{
"docid": "142b1f178ade5b7ff554eae9cad27f69",
"text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
}
] | [
{
"docid": "36e531c34dd8f714f481c6ab9ed1a375",
"text": "Generating informative responses in end-toend neural dialogue systems attracts a lot of attention in recent years. Various previous work leverages external knowledge and the dialogue contexts to generate such responses. Nevertheless, few has demonstrated their capability on incorporating the appropriate knowledge in response generation. Motivated by this, we propose a novel open-domain conversation generation model in this paper, which employs the posterior knowledge distribution to guide knowledge selection, therefore generating more appropriate and informative responses in conversations. To the best of our knowledge, we are the first one who utilize the posterior knowledge distribution to facilitate conversation generation. Our experiments on both automatic and human evaluation clearly verify the superior performance of our model over the state-of-the-art baselines.",
"title": ""
},
{
"docid": "6b8942948b3f23971254ba7b90dac6f0",
"text": "An important preprocess in computer-aided orthodontics is to segment teeth from the dental models accurately, which should involve manual interactions as few as possible. But fully automatic partition of all teeth is not a trivial task, since teeth occur in different shapes and their arrangements vary substantially from one individual to another. The difficulty is exacerbated when severe teeth malocclusion and crowding problems occur, which is a common occurrence in clinical cases. Most published methods in this area either are inaccurate or require lots of manual interactions. Motivated by the state-of-the-art general mesh segmentation methods that adopted the theory of harmonic field to detect partition boundaries, this paper proposes a novel, dental-targeted segmentation framework for dental meshes. With a specially designed weighting scheme and a strategy of a priori knowledge to guide the assignment of harmonic constraints, this method can identify teeth partition boundaries effectively. Extensive experiments and quantitative analysis demonstrate that the proposed method is able to partition high-quality teeth automatically with robustness and efficiency.",
"title": ""
},
{
"docid": "61940aa7a2454ca43612b7657733c9f5",
"text": "One of the sources of difficulty in speech recognition and understanding is the variability due to alternate pronunciations of words. To address the issue we have investigated the use of multiple-pronunciation models (MPMs) in the decoding stage of a speaker-independent speech understanding system. In this paper we address three important issues regarding MPMs: (a) Model construction: How can MPMs be built from available data without human intervention? (b) Model embedding: How should MPM construction interact with the training of the sub-word unit models on which they are based? (c) Utility: Do they help in speech recognition? Automatic, data-driven MPM construction is accomplished using a structural HMM induction algorithm. The resulting MPMs are trained jointlywith a multi-layer perceptron functioningas a phonetic likelihood estimator. The experiments reported here demonstrate that MPMs can significantly improve speech recognition results over standard single pronunciation models.",
"title": ""
},
{
"docid": "2bb0b89491015f124e4b244954508234",
"text": "In recent years, deep neural networks have achieved significant success in Chinese word segmentation and many other natural language processing tasks. Most of these algorithms are end-to-end trainable systems and can effectively process and learn from large scale labeled datasets. However, these methods typically lack the capability of processing rare words and data whose domains are different from training data. Previous statistical methods have demonstrated that human knowledge can provide valuable information for handling rare cases and domain shifting problems. In this paper, we seek to address the problem of incorporating dictionaries into neural networks for the Chinese word segmentation task. Two different methods that extend the bi-directional long short-term memory neural network are proposed to perform the task. To evaluate the performance of the proposed methods, state-of-the-art supervised models based methods and domain adaptation approaches are compared with our methods on nine datasets from different domains. The experimental results demonstrate that the proposed methods can achieve better performance than other state-of-the-art neural network methods and domain adaptation approaches in most cases.",
"title": ""
},
{
"docid": "9423dcfc04f57be48adddc88e40f1963",
"text": "Presynaptic Ca(V)2.2 (N-type) calcium channels are subject to modulation by interaction with syntaxin 1 and by a syntaxin 1-sensitive Galpha(O) G-protein pathway. We used biochemical analysis of neuronal tissue lysates and a new quantitative test of colocalization by intensity correlation analysis at the giant calyx-type presynaptic terminal of the chick ciliary ganglion to explore the association of Ca(V)2.2 with syntaxin 1 and Galpha(O). Ca(V)2.2 could be localized by immunocytochemistry (antibody Ab571) in puncta on the release site aspect of the presynaptic terminal and close to synaptic vesicle clouds. Syntaxin 1 coimmunoprecipitated with Ca(V)2.2 from chick brain and chick ciliary ganglia and was widely distributed on the presynaptic terminal membrane. A fraction of the total syntaxin 1 colocalized with the Ca(V)2.2 puncta, whereas the bulk colocalized with MUNC18-1. Galpha(O,) whether in its trimeric or monomeric state, did not coimmunoprecipitate with Ca(V)2.2, MUNC18-1, or syntaxin 1. However, the G-protein exhibited a punctate staining on the calyx membrane with an intensity that varied in synchrony with that for both Ca channels and syntaxin 1 but only weakly with MUNC18-1. Thus, syntaxin 1 appears to be a component of two separate complexes at the presynaptic terminal, a minor one at the transmitter release site with Ca(V)2.2 and Galpha(O), as well as in large clusters remote from the release site with MUNC18-1. These syntaxin 1 protein complexes may play distinct roles in presynaptic biology.",
"title": ""
},
{
"docid": "1d3192e66e042e67dabeae96ca345def",
"text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.",
"title": ""
},
{
"docid": "b2e5a2395641c004bdc84964d2528b13",
"text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.",
"title": ""
},
{
"docid": "bb29a8e942c69cdb6634faa563cddb3a",
"text": "Convolutional neural network (CNN) finds applications in a variety of computer vision applications ranging from object recognition and detection to scene understanding owing to its exceptional accuracy. There exist different algorithms for CNNs computation. In this paper, we explore conventional convolution algorithm with a faster algorithm using Winograd's minimal filtering theory for efficient FPGA implementation. Distinct from the conventional convolution algorithm, Winograd algorithm uses less computing resources but puts more pressure on the memory bandwidth. We first propose a fusion architecture that can fuse multiple layers naturally in CNNs, reusing the intermediate data. Based on this fusion architecture, we explore heterogeneous algorithms to maximize the throughput of a CNN. We design an optimal algorithm to determine the fusion and algorithm strategy for each layer. We also develop an automated toolchain to ease the mapping from Caffe model to FPGA bitstream using Vivado HLS. Experiments using widely used VGG and AlexNet demonstrate that our design achieves up to 1.99X performance speedup compared to the prior fusion-based FPGA accelerator for CNNs.",
"title": ""
},
{
"docid": "3cf4ef33356720e55748c7f14383830d",
"text": "Article history: Received 7 September 2015 Received in revised form 15 February 2016 Accepted 27 March 2016 Available online 14 April 2016 For many organizations, managing both economic and environmental performance has emerged as a key challenge. Further,with expanding globalization organizations are finding itmore difficult tomaintain adequate supplier relations to balance both economic and environmental performance initiatives. Drawing on transaction cost economics, this study examines how novel information technology like cloud computing can help firms not only maintain adequate supply chain collaboration, but also balance both economic and environmental performance. We analyze survey data from 247 IT and supply chain professionals using structural equation modeling and partial least squares to verify the robustness of our results. Our analyses yield several interesting findings. First, contrary to other studies we find that collaboration does not necessarily affect environmental performance and only partiallymediates the relationship between cloud computing and economic performance. Secondly, the results of our survey provide evidence of the direct effect of cloud computing on both economic and environmental performance. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "13f7b5a92e830bff44c14c77056f9743",
"text": "Many pneumatic energy sources are available for use in autonomous and wearable soft robotics, but it is often not obvious which options are most desirable or even how to compare them. To address this, we compare pneumatic energy sources and review their relative merits. We evaluate commercially available battery-based microcompressors (singly, in parallel, and in series) and cylinders of high-pressure fluid (air and carbon dioxide). We identify energy density (joules/gram) and flow capacity (liters/gram) normalized by the mass of the entire fuel system (versus net fuel mass) as key metrics for soft robotic power systems. We also review research projects using combustion (methane and butane) and monopropellant decomposition (hydrogen peroxide), citing theoretical and experimental values. Comparison factors including heat, effective energy density, and working pressure/flow rate are covered. We conclude by comparing the key metrics behind each technology. Battery-powered microcompressors provide relatively high capacity, but maximum pressure and flow rates are low. Cylinders of compressed fluid provide high pressures and flow rates, but their limited capacity leads to short operating times. While methane and butane possess the highest net fuel energy densities, they typically react at speeds and pressures too high for many soft robots and require extensive system-level development. Hydrogen peroxide decomposition requires not only few additional parts (no pump or ignition system) but also considerable system-level development. We anticipate that this study will provide a framework for configuring fuel systems in soft robotics.",
"title": ""
},
{
"docid": "53e8333b3e4e9874449492852d948ea2",
"text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.",
"title": ""
},
{
"docid": "aac3060a199b016e38be800c213c9dba",
"text": "In this paper, we investigate the use of electroencephalograhic signals for the purpose of recognizing unspoken speech. The term unspoken speech refers to the process in which a subject imagines speaking a given word without moving any articulatory muscle or producing any audible sound. Early work by Wester (Wester, 2006) presented results which were initially interpreted to be related to brain activity patterns due to the imagination of pronouncing words. However, subsequent investigations lead to the hypothesis that the good recognition performance might instead have resulted from temporal correlated artifacts in the brainwaves since the words were presented in blocks. In order to further investigate this hypothesis, we run a study with 21 subjects, recording 16 EEG channels using a 128 cap montage. The vocabulary consists of 5 words, each of which is repeated 20 times during a recording session in order to train our HMM-based classifier. The words are presented in blockwise, sequential, and random order. We show that the block mode yields an average recognition rate of 45.50%, but it drops to chance level for all other modes. Our experiments suggest that temporal correlated artifacts were recognized instead of words in block recordings and back the above-mentioned hypothesis.",
"title": ""
},
{
"docid": "6aa38687ebed443ea0068547d24acb6d",
"text": "In this paper, a digital signal processing (DSP) software development process is described. It starts from the conceptual algorithm design and computer simulation using MATLAB, Simulink, or floating-point C programs. The finite-word-length analysis using MATLAB fixed-point functions or Simulink follows with fixed-point blockset. After verification of the algorithm, a fixed-point C program is developed for a specific fixed-point DSP processor. Software efficiency can be further improved by using mixed C-and-assembly programs, intrinsic functions, and optimized assembly routines in DSP libraries. This integrated software-development process enables students and engineers to understand and appreciate the important differences between floating-point simulations and fixed-point implementation considerations and applications.",
"title": ""
},
{
"docid": "1ec395dbe807ff883dab413419ceef56",
"text": "\"The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure\" provides a new guideline for hypertension prevention and management. The following are the key messages(1) In persons older than 50 years, systolic blood pressure (BP) of more than 140 mm Hg is a much more important cardiovascular disease (CVD) risk factor than diastolic BP; (2) The risk of CVD, beginning at 115/75 mm Hg, doubles with each increment of 20/10 mm Hg; individuals who are normotensive at 55 years of age have a 90% lifetime risk for developing hypertension; (3) Individuals with a systolic BP of 120 to 139 mm Hg or a diastolic BP of 80 to 89 mm Hg should be considered as prehypertensive and require health-promoting lifestyle modifications to prevent CVD; (4) Thiazide-type diuretics should be used in drug treatment for most patients with uncomplicated hypertension, either alone or combined with drugs from other classes. Certain high-risk conditions are compelling indications for the initial use of other antihypertensive drug classes (angiotensin-converting enzyme inhibitors, angiotensin-receptor blockers, beta-blockers, calcium channel blockers); (5) Most patients with hypertension will require 2 or more antihypertensive medications to achieve goal BP (<140/90 mm Hg, or <130/80 mm Hg for patients with diabetes or chronic kidney disease); (6) If BP is more than 20/10 mm Hg above goal BP, consideration should be given to initiating therapy with 2 agents, 1 of which usually should be a thiazide-type diuretic; and (7) The most effective therapy prescribed by the most careful clinician will control hypertension only if patients are motivated. Motivation improves when patients have positive experiences with and trust in the clinician. Empathy builds trust and is a potent motivator. Finally, in presenting these guidelines, the committee recognizes that the responsible physician's judgment remains paramount.",
"title": ""
},
{
"docid": "62b4eb1d0db3cf02d1412fe99690ac61",
"text": "In requirements engineering, there are several approaches for requirements modeling such as goal-oriented, aspect-driven, and system requirements modeling. In practice, companies often customize a given approach to their specific needs. Thus, we seek a solution that allows customization in a systematic way. In this paper, we propose a metamodel for requirements models (called core metamodel) and an approach for customizing this metamodel in order to support various requirements modeling approaches. The core metamodel represents the common concepts extracted from some prevalent approaches. We define the semantics of the concepts and the relations in the core metamodel. Based on this formalization, we can perform reasoning on requirements that may detect implicit relations and inconsistencies. Our approach for customization keeps the semantics of the core concepts intact and thus allows reuse of tools and reasoning over the customized metamodel. We illustrate the customization of our core metamodel with SysML concepts. As a case study, we apply the reasoning on requirements of an industrial mobile service application based on this customized core requirements metamodel.",
"title": ""
},
{
"docid": "59bfb330b9ca7460280fecca78383857",
"text": "Big data poses many facets and challenges when analyzing data, often described with the five big V’s of Volume, Variety, Velocity, Veracity, and Value. However, the most important V – Value can only be achieved when knowledge can be derived from the data. The volume of nowadays datasets make a manual investigation of all data records impossible and automated analysis techniques from data mining or machine learning often cannot be applied in a fully automated fashion to solve many real world analysis problems, and hence, need to be manually trained or adapted. Visual analytics aims to solve this problem with a “human-in-the-loop” approach that provides the analyst with a visual interface that tightly integrates automated analysis techniques with human interaction. However, a holistic understanding of these analytic processes is currently an under-explored research area. A major contribution of this dissertation is a conceptual model-driven approach to visual analytics that focuses on the human-machine interplay during knowledge generation. At its core, it presents the knowledge generation model which is subsequently specialized for human analytic behavior, visual interactive machine learning, and dimensionality reduction. These conceptual processes extend and combine existing conceptual works that aim to establish a theoretical foundation for visual analytics. In addition, this dissertation contributes novel methods to investigate and support human knowledge generation processes, such as semi-automation and recommendation, analytic behavior and trust building, or visual interaction with machine learning. These methods are investigated in close collaboration with real experts from different application domains (such as soccer analysis, linguistic intonation research, and criminal intelligence analysis) and hence, different data characteristics (geospatial movement, time series, and high-dimensional). The results demonstrate that this conceptual approach leads to novel, more tightly integrated, methods that support the analyst in knowledge generation. In a final broader discussion, this dissertation reflects the conceptual and methodological contributions and enumerates research areas at the intersection of data mining, machine learning, visualization, and human-computer interaction research, with the ultimate goal to make big data exploration more effective, efficient, and transparent.",
"title": ""
},
{
"docid": "5ddcfa43a488ee92dbf13f0a91310d5a",
"text": "We present in this chapter an overview of the Mumford and Shah model for image segmentation. We discuss its various formulations, some of its properties, the mathematical framework, and several approximations. We also present numerical algorithms and segmentation results using the Ambrosio–Tortorelli phase-field approximations on one hand, and using the level set formulations on the other hand. Several applications of the Mumford–Shah problem to image restoration are also presented. . Introduction: Description of theMumford and Shah Model An important problem in image analysis and computer vision is the segmentation one, that aims to partition a given image into its constituent objects, or to find boundaries of such objects. This chapter is devoted to the description, analysis, approximations, and applications of the classical Mumford and Shah functional proposed for image segmentation. In [–], David Mumford and Jayant Shah have formulated an energy minimization problem that allows to compute optimal piecewise-smooth or piecewise-constant approximations u of a given initial image g. Since then, their model has been analyzed and considered in depth by many authors, by studying properties of minimizers, approximations, and applications to image segmentation, image partition, image restoration, and more generally to image analysis and computer vision. We denote by Ω ⊂ Rd the image domain (an interval if d = , or a rectangle in the plane if d = ). More generally, we assume that Ω is open, bounded, and connected. Let g : Ω → R be a given gray-scale image (a signal in one dimension, a planar image in two dimensions, or a volumetric image in three dimensions). It is natural and without losing any generality to assume that g is a bounded function in Ω, g ∈ L(Ω). As formulated byMumford and Shah [], the segmentation problem in image analysis and computer vision consists in computing a decomposition Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K of the domain of the image g such that (a) The image g varies smoothly and/or slowly within each Ω i . (b) The image g varies discontinuously and/or rapidly across most of the boundary K between different Ω i . From the point of view of approximation theory, the segmentation problem may be restated as seeking ways to define and compute optimal approximations of a general function g(x) by piecewise-smooth functions u(x), i.e., functions u whose restrictions ui to the pieces Ω i of a decomposition of the domain Ω are continuous or differentiable. Mumford and ShahModel and its Applications to Image Segmentation and Image Restoration In what follows, Ω i will be disjoint connected open subsets of a domain Ω, each one with a piecewise-smooth boundary, and K will be a closed set, as the union of boundaries of Ω i inside Ω, thus Ω = Ω ∪Ω ∪ . . . ∪ Ωn ∪ K, K = Ω ∩ (∂Ω ∪ . . . ∪ ∂Ωn). The functional E to be minimized for image segmentation is defined by [–], E(u,K) = μ ∫ Ω (u − g)dx + ∫ Ω/K ∣∇u∣dx + ∣K∣, (.) where u : Ω → R is continuous or even differentiable inside each Ω i (or u ∈ H(Ω i)) and may be discontinuous across K. Here, ∣K∣ stands for the total surface measure of the hypersurface K (the counting measure if d = , the length measure if d = , the area measure if d = ). Later, we will define ∣K∣ byHd−(K), the d − dimensional Hausdorff measure in Rd . 
As explained by Mumford and Shah, dropping any of these three terms in (> .), inf E = : without the first, take u = , K = /; without the second, take u = g, K = /; without the third, take for example, in the discrete case K to be the boundary of all pixels of the image g, each Ω i be a pixel and u to be the average (value) of g over each pixel. The presence of all three terms leads to nontrivial solutions u, and an optimal pair (u,K) can be seen as a cartoon of the actual image g, providing a simplification of g. An important particular case is obtained when we restrict E to piecewise-constant functions u, i.e., u = constant ci on each open set Ω i . Multiplying E by μ−, we have μ−E(u,K) = ∑ i ∫ Ω i (g − ci)dx + ∣K∣, where = /μ. It is easy to verify that this is minimized in the variables ci by setting ci = meanΩ i (g) = ∫Ω i g(x)dx ∣Ω i ∣ , where ∣Ω i ∣ denotes here the Lebesgue measure of Ω i (e.g., area if d = , volume if d = ), so it is sufficient to minimize E(K) = ∑ i ∫ Ω i (g −meanΩ i g) dx + ∣K∣. It is possible to interpret E as the limit functional of E as μ → []. Finally, the Mumford and Shah model can also be seen as a deterministic refinement of Geman and Geman’s image restoration model []. . Background: The First Variation In order to better understand, analyze, and use the minimization problem (> .), it is useful to compute its first variation with respect to each of the unknowns. Mumford and Shah Model and its Applications to Image Segmentation and Image Restoration We first recall the definition of Sobolev functions u ∈ W ,(U) [], necessary to properly define a minimizer u when K is fixed. Definition LetU ⊂ Rd be an open set. We denote byW ,(U) (or by H(U)) the set of functions u ∈ L(Ω), whose first-order distributional partial derivatives belong to L(U). This means that there are functions u, . . . ,ud ∈ L(U) such that ∫ U u(x) ∂φ ∂xi (x)dx = − ∫ U ui(x)φ(x)dx for ≤ i ≤ d and for all functions φ ∈ C∞c (U). We may denote by ∂u ∂xi the distributional derivative ui of u and by∇u = ( ∂u ∂x , . . . , ∂u ∂xd ) its distributional gradient. In what follows, we denote by ∣∇u∣(x) the Euclidean norm of the gradient vector at x. H(U) = W ,(U) becomes a Banach space endowed with the norm ∥u∥W ,(U) = ∫ U udx + d ∑ i= ∫ U ( ∂u ∂xi ) dx] / . .. Minimizing in uwith K Fixed Let us assume first that K is fixed, as a closed subset of the open and bounded set Ω ⊂ Rd , and denote by E(u) = μ ∫ Ω/K (u − g)dx + ∫ Ω/K ∣∇u∣dx, for u ∈ W ,(Ω/K), where Ω/K is open and bounded, and g ∈ L(Ω/K). We have the following classical results obtained as a consequence of the standard method of calculus of variations. Proposition There is a unique minimizer of the problem inf u∈W ,(Ω/K) E(u). (.) Proof [] First, we note that ≤ inf E < +∞, since we can choose u ≡ and E(u) = μ ∫Ω/K g (x)dx < +∞. Thus, we can denote by m = inf u E(u) and let {uj} j≥ ∈ W ,(Ω/K) be a minimizing sequence such that lim j→∞ E(uj) = m. Recall that for u, v ∈ L,",
"title": ""
},
{
"docid": "91f718a69532c4193d5e06bf1ea19fd3",
"text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.",
"title": ""
},
{
"docid": "bc6c7fcd98160c48cd3b72abff8fad02",
"text": "A new concept of formality of linguistic expressions is introduced and argued to be the most important dimension of variation between styles or registers. Formality is subdivided into \"deep\" formality and \"surface\" formality. Deep formality is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. This is achieved by explicit and precise description of the elements of the context needed to disambiguate the expression. A formal style is characterized by detachment, accuracy, rigidity and heaviness; an informal style is more flexible, direct, implicit, and involved, but less informative. An empirical measure of formality, the F-score, is proposed, based on the frequencies of different word classes in the corpus. Nouns, adjectives, articles and prepositions are more frequent in formal styles; pronouns, adverbs, verbs and interjections are more frequent in informal styles. It is shown that this measure, though coarse-grained, adequately distinguishes more from less formal genres of language production, for some available corpora in Dutch, French, Italian, and English. A factor similar to the F-score automatically emerges as the most important one from factor analyses applied to extensive data in 7 different languages. Different situational and personality factors are examined which determine the degree of formality in linguistic expression. It is proposed that formality becomes larger when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated. Some empirical evidence and a preliminary theoretical explanation for these propositions is discussed. Short Abstract: The concept of \"deep\" formality is proposed as the most important dimension of variation between language registers or styles. It is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. An empirical measure, the F-score, is proposed, based on the frequencies of different word classes. This measure adequately distinguishes different genres of language production using data for Dutch, French, Italian, and English. Factor analyses applied to data in 7 different languages produce a similar factor as the most important one. Both the data and the theoretical model suggest that formality increases when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated.",
"title": ""
},
{
"docid": "18d4a0b3b6eceb110b6eb13fde6981c7",
"text": "We simulate the growth of a benign avascular tumour embedded in normal tissue, including cell sorting that occurs between tumour and normal cells, due to the variation of adhesion between diierent cell types. The simulation uses the Potts Model, an energy minimisation method. Trial random movements of cell walls are checked to see if they reduce the adhesion energy of the tissue. These trials are then accepted with Boltzmann weighted probability. The simulated tumour initially grows exponentially, then forms three concentric shells as the nutrient level supplied to the core by diiusion decreases: the outer shell consists of live proliferating cells, the middle of quiescent cells and the centre is a necrotic core, where the nutrient concentration is below the critical level that sustains life. The growth rate of the tumour decreases at the onset of shell formation in agreement with experimental observation. The tumour eventually approaches a steady state, where the increase in volume due to the growth of the proliferating cells equals the loss of volume due to the disintegration of cells in the necrotic core. The nal thickness of the shells also agrees with experiment.",
"title": ""
}
] | scidocsrr |
10a2ef7db2c68903bc4fbd07b4a600de | Online Affect Detection and Robot Behavior Adaptation for Intervention of Children With Autism | [
{
"docid": "0e8e72e35393fca6f334ae2909a4cc74",
"text": "High-functioning children with autism were compared with two control groups on measures of anxiety and social worries. Comparison control groups consisted of children with specific language impairment (SLI) and normally developing children. Each group consisted of 15 children between the ages of 8 and 12 years and were matched for age and gender. Children with autism were found to be most anxious on both measures. High anxiety subscale scores for the autism group were separation anxiety and obsessive-compulsive disorder. These findings are discussed within the context of theories of autism and anxiety in the general population of children. Suggestions for future research are made.",
"title": ""
},
{
"docid": "f1ef345686548b060b70ebc972d51b47",
"text": "Given the importance of implicit communication in human interactions, it would be valuable to have this capability in robotic systems wherein a robot can detect the motivations and emotions of the person it is working with. Recognizing affective states from physiological cues is an effective way of implementing implicit human–robot interaction. Several machine learning techniques have been successfully employed in affect-recognition to predict the affective state of an individual given a set of physiological features. However, a systematic comparison of the strengths and weaknesses of these methods has not yet been done. In this paper, we present a comparative study of four machine learning methods—K-Nearest Neighbor, Regression Tree (RT), Bayesian Network and Support Vector Machine (SVM) as applied to the domain of affect recognition using physiological signals. The results showed that SVM gave the best classification accuracy even though all the methods performed competitively. RT gave the next best classification accuracy and was the most space and time efficient.",
"title": ""
}
] | [
{
"docid": "44b14f681f175027b22150c115d64c44",
"text": "Video segmentation has become an important and active research area with a large diversity of proposed approaches. Graph-based methods, enabling top-performance on recent benchmarks, consist of three essential components: 1. powerful features account for object appearance and motion similarities; 2. spatio-temporal neighborhoods of pixels or superpixels (the graph edges) are modeled using a combination of those features; 3. video segmentation is formulated as a graph partitioning problem. While a wide variety of features have been explored and various graph partition algorithms have been proposed, there is surprisingly little research on how to construct a graph to obtain the best video segmentation performance. This is the focus of our paper. We propose to combine features by means of a classifier, use calibrated classifier outputs as edge weights and define the graph topology by edge selection. By learning the graph (without changes to the graph partitioning method), we improve the results of the best performing video segmentation algorithm by 6% on the challenging VSB100 benchmark, while reducing its runtime by 55%, as the learnt graph is much sparser.",
"title": ""
},
{
"docid": "96423c77c714172e04d375b7ee1e9869",
"text": "This paper presents a body-fixed-sensor-based approach to assess potential sleep apnea patients. A trial involving 15 patients at a sleep unit was undertaken. Vibration sounds were acquired from an accelerometer sensor fixed with a noninvasive mounting on the suprasternal notch of subjects resting in supine position. Respiratory, cardiac, and snoring components were extracted by means of digital signal processing techniques. Mainly, the following biomedical parameters used in new sleep apnea diagnosis strategies were calculated: heart rate, heart rate variability, sympathetic and parasympathetic activity, respiratory rate, snoring rate, pitch associated with snores, and airflow indirect quantification. These parameters were compared to those obtained by means of polysomnography and an accurate microphone. Results demonstrated the feasibility of implementing an accelerometry-based portable device as a simple and cost-effective solution for contributing to the screening of sleep apnea-hypopnea syndrome and other breathing disorders.",
"title": ""
},
{
"docid": "2827e0d197b7f66c7f6ceb846c6aaa27",
"text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "662ec285031306816814378e6e192782",
"text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.",
"title": ""
},
{
"docid": "74290ff01b32423087ce0025625dc445",
"text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.",
"title": ""
},
{
"docid": "e7c97ff0a949f70b79fb7d6dea057126",
"text": "Most conventional document categorization methods require a large number of documents with labeled categories for training. These methods are hard to be applied in scenarios, such as scientific publications, where training data is expensive to obtain and categories could change over years and across domains. In this work, we propose UNEC, an unsupervised representation learning model that directly categories documents without the need of labeled training data. Specifically, we develop a novel cascade embedding approach. We first embed concepts, i.e., significant phrases mined from scientific publications, into continuous vectors, which capture concept semantics. Based on the concept similarity graph built from the concept embedding, we further embed concepts into a hidden category space, where the category information of concepts becomes explicit. Finally we categorize documents by jointly considering the category attribution of their concepts. Our experimental results show that UNEC significantly outperforms several strong baselines on a number of real scientific corpora, under both automatic and manual evaluation.",
"title": ""
},
{
"docid": "51165fba0bc57e99069caca5796398c7",
"text": "Reinforcement learning has achieved several successes in sequential decision problems. However, these methods require a large number of training iterations in complex environments. A standard paradigm to tackle this challenge is to extend reinforcement learning to handle function approximation with deep learning. Lack of interpretability and impossibility to introduce background knowledge limits their usability in many safety-critical real-world scenarios. In this paper, we study how to combine reinforcement learning and external knowledge. We derive a rule-based variant version of the Sarsa(λ) algorithm, which we call Sarsarb(λ), that augments data with complex knowledge and exploits similarities among states. We apply our method to a trading task from the Stock Market Environment. We show that the resulting algorithm leads to much better performance but also improves training speed compared to the Deep Qlearning (DQN) algorithm and the Deep Deterministic Policy Gradients (DDPG) algorithm.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "26cc29177040461634929eb1fa13395d",
"text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.",
"title": ""
},
{
"docid": "13177a7395eed80a77571bd02a962bc9",
"text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.",
"title": ""
},
{
"docid": "0cd42818f21ada2a8a6c2ed7a0f078fe",
"text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.",
"title": ""
},
{
"docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4",
"text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.",
"title": ""
},
{
"docid": "77f60100af0c9556e5345ee1b04d8171",
"text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.",
"title": ""
},
{
"docid": "e8f431676ed0a85cb09a6462303a3ec7",
"text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.",
"title": ""
},
{
"docid": "e3b473dbff892af0175a73275c770f7d",
"text": "Spacecraft require all manner of both digital and analog circuits. Onboard digital systems are constructed almost exclusively from field-programmable gate array (FPGA) circuits providing numerous advantages over discrete design including high integration density, high reliability, fast turn-around design cycle time, lower mass, volume, and power consumption, and lower parts acquisition and flight qualification costs. Analog and mixed-signal circuits perform tasks ranging from housekeeping to signal conditioning and processing. These circuits are painstakingly designed and built using discrete components due to a lack of options for field-programmability. FPAA (Field-Programmable Analog Array) and FPMA (Field-Programmable Mixed-signal Array) parts exist [1] but not in radiation-tolerant technology and not necessarily in an architecture optimal for the design of analog circuits for spaceflight applications. This paper outlines an architecture proposed for an FPAA fabricated in an existing commercial digital CMOS process used to make radiation-tolerant antifuse-based FPGA devices. The primary concerns are the impact of the technology and the overall array architecture on the flexibility of programming, the bandwidth available for high-speed analog circuits, and the accuracy of the components for highperformance applications.",
"title": ""
},
{
"docid": "774df4733d98b781f32222cf843ec381",
"text": "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domain Ps and Pt that can be estimated with optimal transport. We propose a solution of this problem that allows to recover an estimated target P t = (X, f(X)) by optimizing simultaneously the optimal coupling and f . We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.",
"title": ""
},
{
"docid": "b7a08eaeb69fa6206cb9aec9cc54f2c3",
"text": "This paper describes a computational pragmatic model which is geared towards providing helpful answers to modal and hypothetical questions. The work brings together elements from fonna l . semantic theories on modality m~d question answering, defines a wkler, pragmatically flavoured, notion of answerhood based on non-monotonic inference aod develops a notion of context, within which aspects of more cognitively oriented theories, such as Relevance Theory, can be accommodated. The model has been inlplemented. The research was fundexl by ESRC grant number R000231279.",
"title": ""
},
{
"docid": "ca905aef2477905783f7d18be841f99b",
"text": "PURPOSE\nHumans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit.\n\n\nMETHODS\nIn experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field.\n\n\nRESULTS\nPursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. CONCLUSIONS. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.",
"title": ""
},
{
"docid": "3907bddf6a56b96c4e474d46ddd04359",
"text": "The aim of this review is to discuss the accumulating evidence that suggests that grape extracts and purified grape polyphenols possess a diverse array of biological actions and may be beneficial in the prevention of some inflammatory-mediated diseases including cardiovascular disease. The active components from grape extracts, which include the grape seed, grape skin, and grape juice, that have been identified thus far include polyphenols such as resveratrol, phenolic acids, anthocyanins, and flavonoids. All possess potent antioxidant properties and have been shown to decrease low-density lipoprotein-cholesterol oxidation and platelet aggregation. These compounds also possess a range of additional cardioprotective and vasoprotective properties including antiatherosclerotic, antiarrhythmic, and vasorelaxation actions. Although not exclusive, antioxidant properties of grape polyphenols are likely to be central to their mechanism(s) of action, which also include cellular signaling mechanisms and interactions at the genomic level. This review discusses some of the evidence favoring the consumption of grape extracts rich in polyphenols in the prevention of cardiovascular disease. Consumption of grape and grape extracts and/or grape products such as red wine may be beneficial in preventing the development of chronic degenerative diseases such as cardiovascular disease.",
"title": ""
},
{
"docid": "024cc15c164656f90ade55bf3c391405",
"text": "Unmanned aerial vehicles (UAVs), also known as drones have many applications and they are a current trend across many industries. They can be used for delivery, sports, surveillance, professional photography, cinematography, military combat, natural disaster assistance, security, and the list grows every day. Programming opens an avenue to automate many processes of daily life and with the drone as aerial programmable eyes, security and surveillance can become more efficient and cost effective. At Barry University, parking is becoming an issue as the number of people visiting the school greatly outnumbers the convenient parking locations. This has caused a multitude of hazards in parking lots due to people illegally parking, as well as unregistered vehicles parking in reserved areas. In this paper, we explain how automated drone surveillance is utilized to detect unauthorized parking at Barry University. The automated process is incorporated into Java application and completed in three steps: collecting visual data, processing data automatically, and sending automated responses and queues to the operator of the system.",
"title": ""
}
] | scidocsrr |
80df194bf7f0aedd9a14fb55de2b3856 | The Body and the Beautiful: Health, Attractiveness and Body Composition in Men’s and Women’s Bodies | [
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
}
] | [
{
"docid": "dabbcd5d79b011b7d091ef3a471d9779",
"text": "This paper borrows ideas from social science to inform the design of novel \"sensing\" user-interfaces for computing technology. Specifically, we present five design challenges inspired by analysis of human-human communication that are mundanely addressed by traditional graphical user interface designs (GUIs). Although classic GUI conventions allow us to finesse these questions, recent research into innovative interaction techniques such as 'Ubiquitous Computing' and 'Tangible Interfaces' has begun to expose the interaction challenges and problems they pose. By making them explicit we open a discourse on how an approach similar to that used by social scientists in studying human-human interaction might inform the design of novel interaction mechanisms that can be used to handle human-computer communication accomplishments",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "bd3717bd46869b9be3153478cbd19f2a",
"text": "The study was conducted to assess the effectiveness of jasmine oil massage on labour pain during first stage of labour among 40 primigravida women. The study design adopted was true experimental approach with pre-test post-test control group design. The demographic Proforma were collected from the women by interview and Visual analogue scale was used to measure the level of labour pain in both the groups. Data obtained in these areas were analysed by descriptive and inferential statistics. A significant difference was found in the experimental group( t 9.869 , p<0.05) . A significant difference was found between experimental group and control group. cal",
"title": ""
},
{
"docid": "4bd123c2c44e703133e9a6093170db39",
"text": "This paper presents a single-phase cascaded H-bridge converter for a grid-connected photovoltaic (PV) application. The multilevel topology consists of several H-bridge cells connected in series, each one connected to a string of PV modules. The adopted control scheme permits the independent control of each dc-link voltage, enabling, in this way, the tracking of the maximum power point for each string of PV panels. Additionally, low-ripple sinusoidal-current waveforms are generated with almost unity power factor. The topology offers other advantages such as the operation at lower switching frequency or lower current ripple compared to standard two-level topologies. Simulation and experimental results are presented for different operating conditions.",
"title": ""
},
{
"docid": "e637dc1aee0632f61a29c8609187a98b",
"text": "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.",
"title": ""
},
{
"docid": "7ce9ef05d3f4a92f6b187d7986b70be1",
"text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.",
"title": ""
},
{
"docid": "a8d6a864092b3deb58be27f0f76b02c2",
"text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "41d32df9d58f9c38f75010c87c0c3327",
"text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.",
"title": ""
},
{
"docid": "db36273a3669e1aeda1bf2c5ab751387",
"text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.",
"title": ""
},
{
"docid": "01962e512740addbe5f444ed581ebb48",
"text": "We investigate how neural, encoder-decoder translation systems output target strings of appropriate lengths, finding that a collection of hidden units learns to explicitly implement this functionality.",
"title": ""
},
{
"docid": "262c11ab9f78e5b3f43a31ad22cf23c5",
"text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.",
"title": ""
},
{
"docid": "1a0ed30b64fa7f8d39a12acfcadfd763",
"text": "This letter presents a smart shelf configuration for radio frequency identification (RFID) application. The proposed shelf has an embedded leaking microstrip transmission line with extended ground plane. This structure, when connected to an RFID reader, allows detecting tagged objects in close proximity with proper field confinement to avoid undesired reading of neighboring shelves. The working frequency band covers simultaneously the three world assigned RFID subbands at ultrahigh frequency (UHF). The concept is explored by full-wave simulations and it is validated with thorough experimental tests.",
"title": ""
},
{
"docid": "ff8089430cdae3e733b06a7aa1b759b4",
"text": "We derive a model for consumer loan default and credit card expenditure. The default model is based on statistical models for discrete choice, in contrast to the usual procedure of linear discriminant analysis. The model is then extended to incorporate the default probability in a model of expected profit. The technique is applied to a large sample of applications and expenditure from a major credit card company. The nature of the data mandates the use of models of sample selection for estimation. The empirical model for expected profit produces an optimal acceptance rate for card applications which is far higher than the observed rate used by the credit card vendor based on the discriminant analysis. I am grateful to Terry Seaks for valuable comments on an earlier draft of this paper and to Jingbin Cao for his able research assistance. The provider of the data and support for this project has requested anonymity, so I must thank them as such. Their help and support are gratefully acknowledged. Participants in the applied econometrics workshop at New York University also provided useful commentary.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "10318d39b3ad18779accbf29b2f00fcd",
"text": "Designing convolutional neural networks (CNN) models for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant effort has been dedicated to design and improve mobile models on all three dimensions, it is challenging to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated neural architecture search approach for designing resourceconstrained mobile CNN models. We propose to explicitly incorporate latency information into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike in previous work, where mobile latency is considered via another, often inaccurate proxy (e.g., FLOPS), in our experiments, we directly measure real-world inference latency by executing the model on a particular platform, e.g., Pixel phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that permits layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our model achieves 74.0% top-1 accuracy with 76ms latency on a Pixel phone, which is 1.5× faster than MobileNetV2 (Sandler et al. 2018) and 2.4× faster than NASNet (Zoph et al. 2018) with the same top-1 accuracy. On the COCO object detection task, our model family achieves both higher mAP quality and lower latency than MobileNets.",
"title": ""
},
{
"docid": "f6a9670544a784a5fc431746557473a3",
"text": "Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make √N the circuit power increase as N, instead of linearly, by careful circuit-aware system design.",
"title": ""
},
{
"docid": "fa20b9427a8dcfd8db90e0a6eb5e7d8c",
"text": "Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.",
"title": ""
}
] | scidocsrr |
8a554c3c8fa54e27e80b2a2fb5b22d44 | Near-Optimal Algorithms for the Assortment Planning Problem Under Dynamic Substitution and Stochastic Demand | [
{
"docid": "650209a7310ce7506f6384ad42db44f3",
"text": "In this paper, we examine the nature of optimal inventory policies in a system where a retailer manages substitutable products. We first consider a system with two products 1 and 2 whose total demand is D and individual demand proportions are p and (1-p). A fixed proportion of the unsatisfied customers for 1(2) will purchase item 2 (1), if it is available in inventory. For the single period case, we show that the optimal inventory levels of the two items can be computed easily and follow what we refer to as \"partially decoupled\" policies, i.e. base stock policies that are not state dependent, in certain critical regions of interest both when D is known and random. Furthermore, we show that such a partially decoupled base-stock policy is optimal even in a multi-period version of the problem for known D for a wide range of parameter values. Using a numerical study, we show that heuristics based on the de-coupled inventory policies perform well in conditions more general than the ones assumed to obtain the analytical results. The analytical and numerical results suggest that the approach presented here is most valuable in retail settings for product categories where there is a moderate level of substitution between items in the category, demand variation at the category level is not too high and service levels are high.",
"title": ""
}
] | [
{
"docid": "6b7de13e2e413885e0142e3b6bf61dc9",
"text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.",
"title": ""
},
{
"docid": "13091eb3775715269b7bee838f0a6b00",
"text": "Smartphones can now connect to a variety of external sensors over wired and wireless channels. However, ensuring proper device interaction can be burdensome, especially when a single application needs to integrate with a number of sensors using different communication channels and data formats. This paper presents a framework to simplify the interface between a variety of external sensors and consumer Android devices. The framework simplifies both application and driver development with abstractions that separate responsibilities between the user application, sensor framework, and device driver. These abstractions facilitate a componentized framework that allows developers to focus on writing minimal pieces of sensor-specific code enabling an ecosystem of reusable sensor drivers. The paper explores three alternative architectures for application-level drivers to understand trade-offs in performance, device portability, simplicity, and deployment ease. We explore these tradeoffs in the context of four sensing applications designed to support our work in the developing world. They highlight a range of sensor usage models for our application-level driver framework that vary data types, configuration methods, communication channels, and sampling rates to demonstrate the framework's effectiveness.",
"title": ""
},
{
"docid": "ce63aad5288d118eb6ca9d99b96e9cac",
"text": "Unknown malware has increased dramatically, but the existing security software cannot identify them effectively. In this paper, we propose a new malware detection and classification method based on n-grams attribute similarity. We extract all n-grams of byte codes from training samples and select the most relevant as attributes. After calculating the average value of attributes in malware and benign separately, we determine a test sample is malware or benign by attribute similarity between attributes of the test sample and the two average attributes of malware and benign. We compare our method with a variety of machine learning methods, including Naïve Bayes, Bayesian Networks, Support Vector Machine and C4.5 Decision Tree. Experimental results on public (Open Malware Benchmark) and private (self-collected) datasets both reveal that our method outperforms the other four methods.",
"title": ""
},
{
"docid": "ee9730fa0fde945d70130bcf33960608",
"text": "An operational definition offered in this paper posits learning as a multi-dimensional and multi-phase phenomenon occurring when individuals attempt to solve what they view as a problem. To model someone’s learning accordingly to the definition, it suffices to characterize a particular sequence of that person’s disequilibrium–equilibrium phases in terms of products of a particular mental act, the characteristics of the mental act inferred from the products, and intellectual and psychological needs that instigate or result from these phases. The definition is illustrated by analysis of change occurring in three thinking-aloud interviews with one middle-school teacher. The interviews were about the same task: “Make up a word problem whose solution may be found by computing 4/5 divided by 2/3.” © 2010 Elsevier Inc. All rights reserved. An operational definition is a showing of something—such as a variable, term, or object—in terms of the specific process or set of validation tests used to determine its presence and quantity. Properties described in this manner must be publicly accessible so that persons other than the definer can independently measure or test for them at will. An operational definition is generally designed to model a conceptual definition (Wikipedia)",
"title": ""
},
{
"docid": "4d8f38413169a572c0087fd180a97e44",
"text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{ON}}$ </tex-math></inline-formula>) of these negative capacitance CNFETs improves by <inline-formula> <tex-math notation=\"LaTeX\">$2.1\\times $ </tex-math></inline-formula> versus baseline CNFETs, (i.e., without negative capacitance) for the same OFF-current (<inline-formula> <tex-math notation=\"LaTeX\">${I}_{ \\mathrm{OFF}}$ </tex-math></inline-formula>). This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.",
"title": ""
},
{
"docid": "bf0471fc0c513e9771cbedecc39110e1",
"text": "For emotion recognition, we selected pitch, log energy, formant, mel-band energies, and mel frequency cepstral coefficients (MFCCs) as the base features, and added velocity/acceleration of pitch and MFCCs to form feature streams. We extracted statistics used for discriminative classifiers, assuming that each stream is a one-dimensional signal. Extracted features were analyzed by using quadratic discriminant analysis (QDA) and support vector machine (SVM). Experimental results showed that pitch and energy were the most important factors. Using two different kinds of databases, we compared emotion recognition performance of various classifiers: SVM, linear discriminant analysis (LDA), QDA and hidden Markov model (HMM). With the text-independent SUSAS database, we achieved the best accuracy of 96.3% for stressed/neutral style classification and 70.1% for 4-class speaking style classification using Gaussian SVM, which is superior to the previous results. With the speaker-independent AIBO database, we achieved 42.3% accuracy for 5-class emotion recognition.",
"title": ""
},
{
"docid": "7ad76f9f584b33ffd85b8e5c3bf50e92",
"text": "Deep residual learning (ResNet) (He et al., 2016) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet has won the ImageNet ILSVRC 2015 classification task, and achieved state-of-theart performances in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning, and investigate its impacts on the task of distantly supervised noisy relation extraction. In contradictory to popular beliefs that ResNet only works well for very deep networks, we found that even with 9 layers of CNNs, using identity mapping could significantly improve the performance for distantly-supervised relation extraction.",
"title": ""
},
{
"docid": "f058b13088ca0f38e350cb8c8ffb0c0f",
"text": "In this paper, we propose a representation learning research framework for document-level sentiment analysis. Given a document as the input, document-level sentiment analysis aims to automatically classify its sentiment/opinion (such as thumbs up or thumbs down) based on the textural information. Despite the success of feature engineering in many previous studies, the hand-coded features do not well capture the semantics of texts. In this research, we argue that learning sentiment-specific semantic representations of documents is crucial for document-level sentiment analysis. We decompose the document semantics into four cascaded constitutes: (1) word representation, (2) sentence structure, (3) sentence composition and (4) document composition. Specifically, we learn sentiment-specific word representations, which simultaneously encode the contexts of words and the sentiment supervisions of texts into the continuous representation space. According to the principle of compositionality, we learn sentiment-specific sentence structures and sentence-level composition functions to produce the representation of each sentence based on the representations of the words it contains. The semantic representations of documents are obtained through document composition, which leverages the sentiment-sensitive discourse relations and sentence representations.",
"title": ""
},
{
"docid": "eaf2a943ca3cf2b837eb5c1cae29a37a",
"text": "The natural immune system is a subject of great research interest because of its powerful information processing capabilities. From an informationprocessing perspective, the immune system is a highly parallel system. It provides an excellent model of adaptive processes operating at the local level and of useful behavior emerging at the global level. Moreover, it uses learning, memory, and assodative retrieval to salve recognition and classification tasks. This chapter illustrates different immunological mechanisms and their relation to information processing, and provides an overview of the rapidly emerging field called Artificial Immune Systems. These techniques have been successfully used in pattern recognition, fault detection and diagnosis, computer security, and a variety of other applications.",
"title": ""
},
{
"docid": "abe729a351eb9dbc1688abe5133b28c2",
"text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a serviceanalysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.",
"title": ""
},
{
"docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0",
"text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.",
"title": ""
},
{
"docid": "ece5b4cecc78b115d6e8824f91a45dc6",
"text": "The ability to edit materials of objects in images is desirable by many content creators. However, this is an extremely challenging task as it requires to disentangle intrinsic physical properties of an image. We propose an end-to-end network architecture that replicates the forward image formation process to accomplish this task. Specifically, given a single image, the network first predicts intrinsic properties, i.e. shape, illumination, and material, which are then provided to a rendering layer. This layer performs in-network image synthesis, thereby enabling the network to understand the physics behind the image formation process. The proposed rendering layer is fully differentiable, supports both diffuse and specular materials, and thus can be applicable in a variety of problem settings. We demonstrate a rich set of visually plausible material editing examples and provide an extensive comparative study.",
"title": ""
},
{
"docid": "0a2cba5e6d5b6b467e34e79ee099f509",
"text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.",
"title": ""
},
{
"docid": "0f2023682deaf2eb70c7becd8b3375dd",
"text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.",
"title": ""
},
{
"docid": "78fa87e54c9f6c49101e0079013792e2",
"text": "The NCSM Journal of Mathematics Education Leadership is published at least twice yearly, in the spring and fall. Permission to photocopy material from the NCSM Journal of Mathematics Education Leadership is granted for instructional use when the material is to be distributed free of charge (or at cost only), provided that it is duplicated with the full credit given to the authors of the materials and the NCSM Journal of Mathematics Education Leadership. This permission does not apply to copyrighted articles reprinted in the NCSM Journal of Mathematics Education Leadership. The editors of the NCSM Journal of Mathematics Education Leadership are interested in manuscripts that address concerns of leadership in mathematics rather than those of content or delivery. Editors are interested in publishing articles from a broad spectrum of formal and informal leaders who practice at local, regional, national, and international levels. Categories for submittal include: Note: The last two categories are intended for short pieces of 2 to 3 pages in length. Submittal of items should be done electronically to the Journal editor. Do not put any author identification in the body of the item being submitted, but do include author information as you would like to see it in the Journal. Items submitted for publication will be reviewed by two members of the NCSM Review Panel and one editor with comments and suggested revisions sent back to the author at least six weeks before publication. Final copy must be agreed to at least three weeks before publication. Cover image: A spiral vortex generated with fractal algorithms • Strengthening mathematics education leadership through the dissemination of knowledge related to research, issues, trends, programs, policy, and practice in mathematics education • Fostering inquiry into key challenges of mathematics education leadership • Raising awareness about key challenges of mathematics education leadership, in order to influence research, programs, policy, and practice • Engaging the attention and support of other education stakeholders, and business and government, in order to broaden as well as strengthen mathematics education leadership E arlier this year, NCSM released a new mission and vision statement. Our mission speaks to our commitment to \" support and sustain improved student achievement through the development of leadership skills and relationships among current and future mathematics leaders. \" Our vision statement challenges us as the leaders in mathematics education to collaborate with all stakeholders and develop leadership skills that will lead to improved …",
"title": ""
},
{
"docid": "60b1f54b968127c1673fbaae5ae03463",
"text": "The wireless networking environment presents formidable challenges to the study of broadcasting and multicasting problems. After addressing the characteristics of wireless networks that distinguish them from wired networks, we introduce and evaluate algorithms for tree construction in infrastructureless, all-wireless applications. The performance metric used to evaluate broadcast and multicast trees is energyefficiency. We develop the Broadcast Incremental Power Algorithm, and adapt it to multicast operation as well. This algorithm exploits the broadcast nature of the wireless communication environment, and addresses the need for energy-efficient operation. We demonstrate that our algorithm provides better performance than algorithms that have been developed for the link-based, wired environment.",
"title": ""
},
{
"docid": "74953f4d53af99b937d8128b7ab8f64c",
"text": "This paper presents a force-based control mode for a hand exoskeleton. This device has been developed with focus on support of the rehabilitation process after hand injuries or strokes. As the device is designed for the later use on patients, which have limited hand mobility, fast undesired movements have to be averted. Safety precautions in the hardware and software design of the system must be taken to ensure this. The construction allows controlling motions of the finger joints. However, due to friction in gears and mechanical construction, it is not possible to move finger joints within the construction without help of actuators. Therefore force sensors are integrated into the construction to sense force exchanged between human and exoskeleton. These allow the human to control the movements of the hand exoskeleton, which is useful to teach new trajectories or can be used for diagnostic purposes. The force control scheme presented in this paper uses the force sensor values to generate a trajectory which is executed by a position control loop based on sliding mode control",
"title": ""
},
{
"docid": "5647fc18a3f5b319a2b4c16f7fea3d39",
"text": "This paper presents an abstract view of mutation analysis. Mutation was originally thought of as making changes to program source, but similar kinds of changes have been applied to other artifacts, including program specifications, XML, and input languages. This paper argues that mutation analysis is actually a way to modify any software artifact based on its syntactic description, and is in the same family of test generation methods that create inputs from syntactic descriptions. The essential characteristic of mutation is that a syntactic description such as a grammar is used to create tests. We call this abstract view grammar-based testing, and view it as an interface, which mutation analysis implements. This shift in view allows mutation to be defined in a general way, yielding three benefits. First, it provides a simpler way to understand mutation. Second, it makes it easier to develop future applications of mutation analysis, such as finite state machines and use case collaboration diagrams. The third benefit, which due to space limitations is not explored in this paper, is ensuring that existing techniques are complete according to the criteria defined here.",
"title": ""
},
{
"docid": "d1d5d161f342a30c9b811fc90df7345b",
"text": "BACKGROUND\nNosocomial infections are widespread and are important contributors to morbidity and mortality. Prevalence studies are useful in revealing the prevalence of hospital-acquired infections.\n\n\nOBJECTIVES\nTo determine the bacterial pathogens associated with hospital acquired surgical site infection (SSI) and urinary tract infection (UTI) and assess their susceptibility patterns in patients admitted in Mekelle Hospital in Ethiopia.\n\n\nMETHODS\nFrom November 2005 to April 2006 a prospective cross sectional study was conducted at Mekelle Hospital, Tigray region, North Ethiopia. The study population comprised of a total of 246 informed and consented adult patients hospitalized for surgical (n = 212) and Gynecology and Obstetrics cases (n = 34).\n\n\nRESULTS\nOf the 246 admitted patients, 68 (27.6%) developed nosocomial infections (SSI and/or nosocomial UTI) based on the clinical evaluations, and positive wound and urine culture results. Gram negative bacteria were predominantly isolated with a rate of 18/34 (53%) and 34/41 (83%) from SSI and UTI respectively. Most of the isolates from UTI have high rates of resistance (> 80%) to the commonly used antibiotics such as ampicillin, amoxicillin, chloramphenicol, gentamicin, streptomycin, and trimethoprim-sulphamethoxazole; and in isolates from SSI to amoxicillin and trimethoprim-sulphamethoxazole.\n\n\nCONCLUSIONS\nThe results showed that the prevalence of HAIs (SSI and nosocomial UTI) in the Hospital is high when compared to previous Ethiopian and other studies despite the use of prophylactic antibiotics. The pathogens causing SSI and UT7 are often resistant to commonly used antimicrobials. The findings underscore the need for an infection control system and surveillance program in the hospital and to monitor antimicrobial resistance pattern for the use of prophylactic and therapeutic antibiotics.",
"title": ""
},
{
"docid": "226607ad7be61174871fcab384ac31b4",
"text": "Traditional image stitching using parametric transforms such as homography, only produces perceptually correct composites for planar scenes or parallax free camera motion between source frames. This limits mosaicing to source images taken from the same physical location. In this paper, we introduce a smoothly varying affine stitching field which is flexible enough to handle parallax while retaining the good extrapolation and occlusion handling properties of parametric transforms. Our algorithm which jointly estimates both the stitching field and correspondence, permits the stitching of general motion source images, provided the scenes do not contain abrupt protrusions.",
"title": ""
}
] | scidocsrr |
35089915d9f374c0ceda5110b12bab24 | History of cannabis as a medicine: a review. | [
{
"docid": "3392de7e3182420e882617f0baff389a",
"text": "BACKGROUND\nIndividuals who initiate cannabis use at an early age, when the brain is still developing, might be more vulnerable to lasting neuropsychological deficits than individuals who begin use later in life.\n\n\nMETHODS\nWe analyzed neuropsychological test results from 122 long-term heavy cannabis users and 87 comparison subjects with minimal cannabis exposure, all of whom had undergone a 28-day period of abstinence from cannabis, monitored by daily or every-other-day observed urine samples. We compared early-onset cannabis users with late-onset users and with controls, using linear regression controlling for age, sex, ethnicity, and attributes of family of origin.\n\n\nRESULTS\nThe 69 early-onset users (who began smoking before age 17) differed significantly from both the 53 late-onset users (who began smoking at age 17 or later) and from the 87 controls on several measures, most notably verbal IQ (VIQ). Few differences were found between late-onset users and controls on the test battery. However, when we adjusted for VIQ, virtually all differences between early-onset users and controls on test measures ceased to be significant.\n\n\nCONCLUSIONS\nEarly-onset cannabis users exhibit poorer cognitive performance than late-onset users or control subjects, especially in VIQ, but the cause of this difference cannot be determined from our data. The difference may reflect (1). innate differences between groups in cognitive ability, antedating first cannabis use; (2). an actual neurotoxic effect of cannabis on the developing brain; or (3). poorer learning of conventional cognitive skills by young cannabis users who have eschewed academics and diverged from the mainstream culture.",
"title": ""
}
] | [
{
"docid": "ba452a03f619b7de7b37fe76bdb186e8",
"text": "Device variability is receiving a lot of interest recently due to its important impact on the design of digital integrated systems. In analog integrated circuits, the variability of identically designed devices has long been a concern since it directly affects the attainable precision. This paper reviews the mismatch device models that are widely used in analog design as well as the fundamental impact of device mismatch on the trade-off between different performance parameters.",
"title": ""
},
{
"docid": "4ae6afb7039936b2e6bcfc030fdb9cea",
"text": "Apart from being used as a means of entertainment, computer games have been adopted for a long time as a valuable tool for learning. Computer games can offer many learning benefits to students since they can consume their attention and increase their motivation and engagement which can then lead to stimulate learning. However, most of the research to date on educational computer games, in particular learning versions of existing computer games, focused only on learner with typical development. Rather less is known about designing educational games for learners with special needs. The current research presents the results of a pilot study. The principal aim of this pilot study is to examine the interest of learners with hearing impairments in using an educational game for learning the sign language notation system SignWriting. The results found indicated that, overall, the application is useful, enjoyable and easy to use: the game can stimulate the students’ interest in learning such notations.",
"title": ""
},
{
"docid": "e2ce393fade02f0dfd20b9aca25afd0f",
"text": "This paper presents a comparative lightning performance study conducted on a 275 kV double circuit shielded transmission line using two software programs, TFlash and Sigma-Slp. The line performance was investigated by using both a single stroke and a statistical performance analysis and considering cases of shielding failure and backflashover. A sensitivity analysis was carried out to determine the relationship between the flashover rate and the parameters influencing it. To improve the lightning performance of the line, metal oxide surge arresters were introduced using different phase and line locations. Optimised arrester arrangements are proposed.",
"title": ""
},
{
"docid": "39549cfe16eec5d4b083bf6a05c3d29f",
"text": "Recently, there has been increasing interest in learning semantic parsers with indirect supervision, but existing work focuses almost exclusively on question answering. Separately, there have been active pursuits in leveraging databases for distant supervision in information extraction, yet such methods are often limited to binary relations and none can handle nested events. In this paper, we generalize distant supervision to complex knowledge extraction, by proposing the first approach to learn a semantic parser for extracting nested event structures without annotated examples, using only a database of such complex events and unannotated text. The key idea is to model the annotations as latent variables, and incorporate a prior that favors semantic parses containing known events. Experiments on the GENIA event extraction dataset show that our approach can learn from and extract complex biological pathway events. Moreover, when supplied with just five example words per event type, it becomes competitive even among supervised systems, outperforming 19 out of 24 teams that participated in the original shared task.",
"title": ""
},
{
"docid": "b5788c52127d2ef06df428d758f1a225",
"text": "Conventional convolutional neural networks use either a linear or a nonlinear filter to extract features from an image patch (region) of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> (typically, <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is small and is equal to <inline-formula> <tex-math notation=\"LaTeX\">$ W$ </tex-math></inline-formula>, e.g., <inline-formula> <tex-math notation=\"LaTeX\">$ H $ </tex-math></inline-formula> is 5 or 7). Generally, the size of the filter is equal to the size <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula> of the input patch. We argue that the representational ability of equal-size strategy is not strong enough. To overcome the drawback, we propose to use subpatch filter whose spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> is smaller than <inline-formula> <tex-math notation=\"LaTeX\">$ H\\times W $ </tex-math></inline-formula>. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ h\\times w $ </tex-math></inline-formula> and is aimed at extracting features from spatial domain. The second one is of spatial size <inline-formula> <tex-math notation=\"LaTeX\">$ 1\\times 1 $ </tex-math></inline-formula> and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in spatial domain. These subpatch networks form a new network called the cascaded subpatch network (CSNet). The feature layer generated by CSNet is called the <italic>csconv</italic> layer. For the whole input image, we construct a deep neural network by stacking a sequence of <italic>csconv</italic> layers. Experimental results on five benchmark data sets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 data set without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 data set.",
"title": ""
},
{
"docid": "2bd15d743690c8bcacb0d01650759d62",
"text": "With the large amount of available data and the variety of features they offer, electronic health records (EHR) have gotten a lot of interest over recent years, and start to be widely used by the machine learning and bioinformatics communities. While typical numerical fields such as demographics, vitals, lab measurements, diagnoses and procedures, are natural to use in machine learning models, there is no consensus yet on how to use the free-text clinical notes. We show how embeddings can be learned from patients’ history of notes, at the word, note and patient level, using simple neural and sequence models. We show on various relevant evaluation tasks that these embeddings are easily transferable to smaller problems, where they enable accurate predictions using only clinical notes.",
"title": ""
},
{
"docid": "cc9ee1b5111974da999d8c52ba393856",
"text": "The back propagation (BP) neural network algorithm is a multi-layer feedforward network trained according to error back propagation algorithm and is one of the most widely applied neural network models. BP network can be used to learn and store a great deal of mapping relations of input-output model, and no need to disclose in advance the mathematical equation that describes these mapping relations. Its learning rule is to adopt the steepest descent method in which the back propagation is used to regulate the weight value and threshold value of the network to achieve the minimum error sum of square. This paper focuses on the analysis of the characteristics and mathematical theory of BP neural network and also points out the shortcomings of BP algorithm as well as several methods for improvement.",
"title": ""
},
{
"docid": "8c2b0e93eae23235335deacade9660f0",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "f4963c41832024b8cd7d3480204275fa",
"text": "Almost surreptitiously, crowdsourcing has entered software engineering practice. In-house development, contracting, and outsourcing still dominate, but many development projects use crowdsourcing-for example, to squash bugs, test software, or gather alternative UI designs. Although the overall impact has been mundane so far, crowdsourcing could lead to fundamental, disruptive changes in how software is developed. Various crowdsourcing models have been applied to software development. Such changes offer exciting opportunities, but several challenges must be met for crowdsourcing software development to reach its potential.",
"title": ""
},
{
"docid": "1420ad48fdba30ac37b176007c3945fa",
"text": "Accurate and fast foreground object extraction is very important for object tracking and recognition in video surveillance. Although many background subtraction (BGS) methods have been proposed in the recent past, it is still regarded as a tough problem due to the variety of challenging situations that occur in real-world scenarios. In this paper, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our proposed framework consists of two components, a traditional BGS segmenter B and a real-time semantic segmenter S. The BGS segmenter B aims to construct background models and segments foreground objects. The realtime semantic segmenter S is used to refine the foreground segmentation outputs as feedbacks for improving the model updating accuracy. B and S work in parallel on two threads. For each input frame It, the BGS segmenter B computes a preliminary foreground/background (FG/BG) mask Bt. At the same time, the real-time semantic segmenter S extracts the object-level semantics St. Then, some specific rules are applied on Bt and St to generate the final detection Dt. Finally, the refined FG/BG mask Dt is fed back to update the background model. Comprehensive experiments evaluated on the CDnet 2014 dataset demonstrate that our proposed method achieves stateof-the-art performance among all unsupervised background subtraction methods while operating at real-time, and even performs better than some deep learning based supervised algorithms. In addition, our proposed framework is very flexible and has the potential for generalization.",
"title": ""
},
{
"docid": "ae497143f2c1b15623ab35b360d954e5",
"text": "With the popularity of social media (e.g., Facebook and Flicker), users could easily share their check-in records and photos during their trips. In view of the huge amount of check-in data and photos in social media, we intend to discover travel experiences to facilitate trip planning. Prior works have been elaborated on mining and ranking existing travel routes from check-in data. We observe that when planning a trip, users may have some keywords about preference on his/her trips. Moreover, a diverse set of travel routes is needed. To provide a diverse set of travel routes, we claim that more features of Places of Interests (POIs) should be extracted. Therefore, in this paper, we propose a Keyword-aware Skyline Travel Route (KSTR) framework that use knowledge extraction from historical mobility records and the user's social interactions. Explicitly, we model the \"Where, When, Who\" issues by featurizing the geographical mobility pattern, temporal influence and social influence. Then we propose a keyword extraction module to classify the POI-related tags automatically into different types, for effective matching with query keywords. We further design a route reconstruction algorithm to construct route candidates that fulfill the query inputs. To provide diverse query results, we explore Skyline concepts to rank routes. To evaluate the effectiveness and efficiency of the proposed algorithms, we have conducted extensive experiments on real location-based social network datasets, and the experimental results show that KSTR does indeed demonstrate good performance compared to state-of-the-art works.",
"title": ""
},
{
"docid": "b1f000790b6ff45bd9b0b7ba3aec9cb2",
"text": "Broad-scale destruction and fragmentation of native vegetation is a highly visible result of human land-use throughout the world (Chapter 4). From the Atlantic Forests of South America to the tropical forests of Southeast Asia, and in many other regions on Earth, much of the original vegetation now remains only as fragments amidst expanses of land committed to feeding and housing human beings. Destruction and fragmentation of habitats are major factors in the global decline of populations and species (Chapter 10), the modification of native plant and animal communities and the alteration of ecosystem processes (Chapter 3). Dealing with these changes is among the greatest challenges facing the “mission-orientated crisis discipline” of conservation biology (Soulé 1986; see Chapter 1). Habitat fragmentation, by definition, is the “breaking apart” of continuous habitat, such as tropical forest or semi-arid shrubland, into distinct pieces. When this occurs, three interrelated processes take place: a reduction in the total amount of the original vegetation (i.e. habitat loss); subdivision of the remaining vegetation into fragments, remnants or patches (i.e. habitat fragmentation); and introduction of new forms of land-use to replace vegetation that is lost. These three processes are closely intertwined such that it is often difficult to separate the relative effect of each on the species or community of concern. Indeed, many studies have not distinguished between these components, leading to concerns that “habitat fragmentation” is an ambiguous, or even meaningless, concept (Lindenmayer and Fischer 2006). Consequently, we use “landscape change” to refer to these combined processes and “habitat fragmentation” for issues directly associated with the subdivision of vegetation and its ecological consequences. This chapter begins by summarizing the conceptual approaches used to understand conservation in fragmented landscapes. We then examine the biophysical aspects of landscape change, and how such change affects species and communities, posing two main questions: (i) what are the implications for the patterns of occurrence of species and communities?; and (ii) how does landscape change affect processes that influence the distribution and viability of species and communities? The chapter concludes by identifying the kinds of actions that will enhance the conservation of biota in fragmented landscapes.",
"title": ""
},
{
"docid": "52e1c954aefca110d15c24d90de902b2",
"text": "Reinforcement learning (RL) agents can benefit from adaptive exploration/exploitation behavior, especially in dynamic environments. We focus on regulating this exploration/exploitation behavior by controlling the action-selection mechanism of RL. Inspired by psychological studies which show that affect influences human decision making, we use artificial affect to influence an agent’s action-selection. Two existing affective strategies are implemented and, in addition, a new hybrid method that combines both. These strategies are tested on ‘maze tasks’ in which a RL agent has to find food (rewarded location) in a maze. We use Soar-RL, the new RL-enabled version of Soar, as a model environment. One task tests the ability to quickly adapt to an environmental change, while the other tests the ability to escape a local optimum in order to find the global optimum. We show that artificial affect-controlled action-selection in some cases helps agents to faster adapt to changes in the environment.",
"title": ""
},
{
"docid": "73bbb7122b588761f1bf7b711f21a701",
"text": "This research attempts to find a new closed-form solution of toroid and overlapping windings for axial flux permanent magnet machines. The proposed solution includes analytical derivations for winding lengths, resistances, and inductances as functions of fundamental airgap flux density and inner-to-outer diameter ratio. Furthermore, phase back-EMFs, phase terminal voltages, and efficiencies are calculated and compared for both winding types. Finite element analysis is used to validate the accuracy of the proposed analytical calculations. The proposed solution should assist machine designers to ascertain benefits and limitations of toroid and overlapping winding types as well as to get faster results.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "abdd0d2c13c884b22075b2c3f54a0dfc",
"text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is",
"title": ""
},
{
"docid": "a0f46c67118b2efec2bce2ecd96d11d6",
"text": "This paper describes the implementation of a service to identify and geo-locate real world events that may be present as social activity signals in two different social networks. Specifically, we focus on content shared by users on Twitter and Instagram in order to design a system capable of fusing data across multiple networks. Past work has demonstrated that it is indeed possible to detect physical events using various social network platforms. However, many of these signals need corroboration in order to handle events that lack proper support within a single network. We leverage this insight to design an unsupervised approach that can correlate event signals across multiple social networks. Our algorithm can detect events and identify the location of the event occurrence. We evaluate our algorithm using both simulations and real world datasets collected using Twitter and Instagram. The results indicate that our algorithm significantly improves false positive elimination and attains high precision compared to baseline methods on real world datasets.",
"title": ""
},
{
"docid": "03ec20a448dc861d8ba8b89b0963d52d",
"text": "Social Web 2.0 features have become a vital component in a variety of multimedia systems, e.g., YouTube and Last.fm. Interestingly, adult video websites are also starting to adopt these Web 2.0 principles, giving rise to the term “Porn 2.0”. This paper examines a large Porn 2.0 social network, through data covering 563k users. We explore a number of unusual behavioural aspects that set this apart from more traditional multimedia social networks. We particularly focus on the role of gender and sexuality, to understand how these different groups behave. A number of key differences are discovered relating to social demographics, modalities of interaction and content consumption habits, shedding light on this understudied area of online activity.",
"title": ""
},
{
"docid": "96a38b8b6286169cdd98aa6778456e0c",
"text": "Data mining is on the interface of Computer Science andStatistics, utilizing advances in both disciplines to make progressin extracting information from large databases. It is an emergingfield that has attracted much attention in a very short period oftime. This article highlights some statistical themes and lessonsthat are directly relevant to data mining and attempts to identifyopportunities where close cooperation between the statistical andcomputational communities might reasonably provide synergy forfurther progress in data analysis.",
"title": ""
},
{
"docid": "25d25da610b4b3fe54b665d55afc3323",
"text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.",
"title": ""
}
] | scidocsrr |
016bf355adcc396c31dacc83da145b0e | Personality as a predictor of Business Social Media Usage: an Empirical Investigation of Xing Usage Patterns | [
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "ee6d70f4287f1b43e1c36eba5f189523",
"text": "Received: 10 March 2008 Revised: 31 May 2008 2nd Revision: 27 July 2008 Accepted: 11 August 2008 Abstract For more than a century, concern for privacy (CFP) has co-evolved with advances in information technology. The CFP refers to the anxious sense of interest that a person has because of various types of threats to the person’s state of being free from intrusion. Research studies have validated this concept and identified its consequences. For example, research has shown that the CFP can have a negative influence on the adoption of information technology; but little is known about factors likely to influence such concern. This paper attempts to fill that gap. Because privacy is said to be a part of a more general ‘right to one’s personality’, we consider the so-called ‘Big Five’ personality traits (agreeableness, extraversion, emotional stability, openness to experience, and conscientiousness) as factors that can influence privacy concerns. Protection motivation theory helps us to explain this influence in the context of an emerging pervasive technology: location-based services. Using a survey-based approach, we find that agreeableness, conscientiousness, and openness to experience each affect the CFP. These results have implications for the adoption, the design, and the marketing of highly personalized new technologies. European Journal of Information Systems (2008) 17, 387–402. doi:10.1057/ejis.2008.29",
"title": ""
},
{
"docid": "5a5fbde8e0e264410fe23322a9070a39",
"text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.",
"title": ""
}
] | [
{
"docid": "a4f2a82daf86314363ceeac34cba7ed9",
"text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.",
"title": ""
},
{
"docid": "fcf2fd920ac463e505e68aa02baef795",
"text": "Channel modeling is a critical topic when considering designing, learning, or evaluating the performance of any communications system. Most prior work in designing or learning new modulation schemes has focused on using highly simplified analytic channel models such as additive white Gaussian noise (AWGN), Rayleigh fading channels or similar. Recently, we proposed the usage of a generative adversarial networks (GANs) to jointly approximate a wireless channel response model (e.g. from real black box measurements) and optimize for an efficient modulation scheme over it using machine learning. This approach worked to some degree, but was unable to produce accurate probability distribution functions (PDFs) representing the stochastic channel response. In this paper, we focus specifically on the problem of accurately learning a channel PDF using a variational GAN, introducing an architecture and loss function which can accurately capture stochastic behavior. We illustrate where our prior method failed and share results capturing the performance of such as system over a range of realistic channel distributions.",
"title": ""
},
{
"docid": "681b46b159c7b5df2b1bf99e9f0064fd",
"text": "Purpose – The purpose of this paper is to examine the factors within the technology-organization-environment (TOE) framework that affect the decision to adopt electronic commerce (EC) and extent of EC adoption, as well as adoption and non-adoption of different EC applications within smalland medium-sized enterprises (SMEs). Design/methodology/approach – A questionnaire-based survey was conducted to collect data from 235 managers or owners of manufacturing SMEs in Iran. The data were analyzed by employing factorial analysis and relevant hypotheses were derived and tested by multiple and logistic regression analysis. Findings – EC adoption within SMEs is affected by perceived relative advantage, perceived compatibility, CEO’s innovativeness, information intensity, buyer/supplier pressure, support from technology vendors, and competition. Similarly, description on determinants of adoption and non-adoption of different EC applications has been provided. Research limitations/implications – Cross-sectional data of this research tend to have certain limitations when it comes to explaining the direction of causality of the relationships among the variables, which will change overtime. Practical implications – The findings offer valuable insights to managers, IS experts, and policy makers responsible for assisting SMEs with entering into the e-marketplace. Vendors should collaborate with SMEs to enhance the compatibility of EC applications with these businesses. To enhance the receptiveness of EC applications, CEOs, innovativeness and perception toward EC advantages should also be aggrandized. Originality/value – This study is perhaps one of the first to use a wide range of variables in the light of TOE framework to comprehensively assess EC adoption behavior, both in terms of initial and post-adoption within SMEs in developing countries, as well adoption and non-adoption of simple and advanced EC applications such as electronic supply chain management systems.",
"title": ""
},
{
"docid": "fdd790d33300c19cb0c340903e503b02",
"text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"title": ""
},
{
"docid": "8a35d871317a372445a5f25eb7610e77",
"text": "Wireless Sensor Networks (WSNs) have their own unique nature of distributed resources and dynamic topology. This introduces very special requirements that should be met by the proposed routing protocols for the WSNs. A Wireless Sensor Network routing protocol is a standard which controls the number of nodes that come to an agreement about the way to route packets between all the computing devices in mobile wireless networks. Today, wireless networks are becoming popular and many routing protocols have been proposed in the literature. Considering these protocols we made a survey on the WSNs energy-efficient routing techniques which are used for Health Care Communication Systems concerning especially the Flat Networks Protocols that have been developed in recent years. Then, as related work, we discuss each of the routing protocols belonging to this category and conclude with a comparison of them.",
"title": ""
},
{
"docid": "6e993c4f537dfb8c73980dd56aead6d8",
"text": "A novel compact 4 × 4 Butler matrix using only microstrip couplers and a crossover is proposed in this letter. Compared with the conventional Butler matrix, the proposed one avoids the interconnecting mismatch loss and imbalanced amplitude introduced by the phase shifter. The measurements show accurate phase differences of 45±0.8° and -135±0.9° with an amplitude imbalance less than 0.4 dB. The 10 dB return loss bandwidth is 20.1%.",
"title": ""
},
{
"docid": "ffef3f247f0821eee02b8d8795ddb21c",
"text": "A broadband polarization reconfigurable rectenna is proposed, which can operate in three polarization modes. The receiving antenna of the rectenna is a polarization reconfigurable planar monopole antenna. By installing switches on the feeding network, the antenna can switch to receive electromagnetic (EM) waves with different polarizations, including linear polarization (LP), right-hand and left-hand circular polarizations (RHCP/LHCP). To achieve stable conversion efficiency of the rectenna (nr) in all the modes within a wide frequency band, a tunable matching network is inserted between the rectifying circuit and the antenna. The measured nr changes from 23.8% to 31.9% in the LP mode within 5.1-5.8 GHz and from 22.7% to 24.5% in the CP modes over 5.8-6 GHz. Compared to rectennas with conventional broadband matching network, the proposed rectenna exhibits more stable conversion efficiency.",
"title": ""
},
{
"docid": "f26d34a762ce2c8ffd1c92ec0a86d56a",
"text": "Despite recent interest in digital fabrication, there are still few algorithms that provide control over how light propagates inside a solid object. Existing methods either work only on the surface or restrict themselves to light diffusion in volumes. We use multi-material 3D printing to fabricate objects with embedded optical fibers, exploiting total internal reflection to guide light inside an object. We introduce automatic fiber design algorithms together with new manufacturing techniques to route light between two arbitrary surfaces. Our implicit algorithm optimizes light transmission by minimizing fiber curvature and maximizing fiber separation while respecting constraints such as fiber arrival angle. We also discuss the influence of different printable materials and fiber geometry on light propagation in the volume and the light angular distribution when exiting the fiber. Our methods enable new applications such as surface displays of arbitrary shape, touch-based painting of surfaces, and sensing a hemispherical light distribution in a single shot.",
"title": ""
},
{
"docid": "441a6a879e0723c00f48796fd4bb1a91",
"text": "Recent research on Low Power Wide Area Network (LPWAN) technologies which provide the capability of serving massive low power devices simultaneously has been very attractive. The LoRaWAN standard is one of the most successful developments. Commercial pilots are seen in many countries around the world. However, the feasibility of large scale deployments, for example, for smart city applications need to be further investigated. This paper provides a comprehensive case study of LoRaWAN to show the feasibility, scalability, and reliability of LoRaWAN in realistic simulated scenarios, from both technical and economic perspectives. We develop a Matlab based LoRaWAN simulator to offer a software approach of performance evaluation. A practical LoRaWAN network covering Greater London area is implemented. Its performance is evaluated based on two typical city monitoring applications. We further present an economic analysis and develop business models for such networks, in order to provide a guideline for commercial network operators, IoT vendors, and city planners to investigate future deployments of LoRaWAN for smart city applications.",
"title": ""
},
{
"docid": "bee35be37795d274dfbb185036fb8ae9",
"text": "This paper presents a human--machine interface to control exoskeletons that utilizes electrical signals from the muscles of the operator as the main means of information transportation. These signals are recorded with electrodes attached to the skin on top of selected muscles and reflect the activation of the observed muscle. They are evaluated by a sophisticated but simplified biomechanical model of the human body to derive the desired action of the operator. A support action is computed in accordance to the desired action and is executed by the exoskeleton. The biomechanical model fuses results from different biomechanical and biomedical research groups and performs a sensible simplification considering the intended application. Some of the model parameters reflect properties of the individual human operator and his or her current body state. A calibration algorithm for these parameters is presented that relies exclusively on sensors mounted on the exoskeleton. An exoskeleton for knee joint support was designed and constructed to verify the model and to investigate the interaction between operator and machine in experiments with force support during everyday movements.",
"title": ""
},
{
"docid": "631b6c1bce729a25c02f499464df7a4f",
"text": "Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.",
"title": ""
},
{
"docid": "5dad2c804c4718b87ae6ee9d7cc5a054",
"text": "The masquerade attack, where an attacker takes on the identity of a legitimate user to maliciously utilize that user’s privileges, poses a serious threat to the security of information systems. Such attacks completely undermine traditional security mechanisms due to the trust imparted to user accounts once they have been authenticated. Many attempts have been made at detecting these attacks, yet achieving high levels of accuracy remains an open challenge. In this paper, we discuss the use of a specially tuned sequence alignment algorithm, typically used in bioinformatics, to detect instances of masquerading in sequences of computer audit data. By using the alignment algorithm to align sequences of monitored audit data with sequences known to have been produced by the user, the alignment algorithm can discover areas of similarity and derive a metric that indicates the presence or absence of masquerade attacks. Additionally, we present several scoring systems, methods for accommodating variations in user behavior, and heuristics for decreasing the computational requirements of the algorithm. Our technique is evaluated against the standard masquerade detection dataset provided by Schonlau et al. [14, 13], and the results show that the use of the sequence alignment technique provides, to our knowledge, the best results of all masquerade detection techniques to date.",
"title": ""
},
{
"docid": "3157970218dc3761576345c0e01e3121",
"text": "This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select and execute among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu",
"title": ""
},
{
"docid": "e79abaaa50d8ab8938f1839c7e4067f9",
"text": "We review the objectives and techniques used in the control of horizontal axis wind turbines at the individual turbine level, where controls are applied to the turbine blade pitch and generator. The turbine system is modeled as a flexible structure operating in the presence of turbulent wind disturbances. Some overview of the various stages of turbine operation and control strategies used to maximize energy capture in below rated wind speeds is given, but emphasis is on control to alleviate loads when the turbine is operating at maximum power. After reviewing basic turbine control objectives, we provide an overview of the common basic linear control approaches and then describe more advanced control architectures and why they may provide significant advantages.",
"title": ""
},
{
"docid": "c99f6ba5851e497206d444d0780a3ef0",
"text": "Digital backchannel systems have been proven useful to help a lecturer gather real-time online feedback from students in a lecture environment. However, the large number of posts made during a lecture creates a major hurdle for the lecturer to promptly analyse them and take actions accordingly in time. To tackle this problem, we propose a solution that analyses the sentiment of students' feedback and visualises the morale trend of the student population to the lecturer in real time. In this paper, we present the user interface for morale visualisation and playback of ranked posts as well as the techniques for sentiment analysis and morale computation.",
"title": ""
},
{
"docid": "a10b0a69ba7d3f902590b35cf0d5ea32",
"text": "This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond American’s borders.",
"title": ""
},
{
"docid": "72bbd468c00ae45979cce3b771e4c2bf",
"text": "Twitter is a popular microblogging and social networking service with over 100 million users. Users create short messages pertaining to a wide variety of topics. Certain topics are highlighted by Twitter as the most popular and are known as “trending topics.” In this paper, we will outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter’s streaming API will be collected and put into documents of equal duration. Data collection procedures will allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalized term frequency analysis are performed on the documents to identify the trending topics. Relative normalized term frequency analysis identifies unigrams, bigrams, and trigrams as trending topics, while term frequcny-inverse document frequency analysis identifies unigrams as trending topics.",
"title": ""
},
{
"docid": "753c52924fadee65697f09d00b4bb187",
"text": "Although labelled graphical, many modelling languages represent important model parts as structured text. We benefit from sophisticated text editors when we use programming languages, but we neglect the same technology when we edit the textual parts of graphical models. Recent advances in generative engineering of textual model editors make the development of such sophisticated text editors practical, even for the smallest textual constructs of graphical languages. In this paper, we present techniques to embed textual model editors into graphical model editors and prove our approach for EMF-based textual editors and graphical editors created with GMF.",
"title": ""
},
{
"docid": "86e0c7b70de40fcd5179bf3ab67bc3a4",
"text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.",
"title": ""
}
] | scidocsrr |
74a11b3a1d2219bd9c69465f6b9f0d6a | Client Clustering for Hiring Modeling in Work Marketplaces | [
{
"docid": "de7d29c7e11445e836bd04c003443c67",
"text": "Logistic regression with `1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale `1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.",
"title": ""
}
] | [
{
"docid": "4c2248db49a810d727eac378cf9e3c0f",
"text": "Based on Life Cycle Assessment (LCA) and Eco-indicator 99 method, a LCA model was applied to conduct environmental impact and end-of-life treatment policy analysis for secondary batteries. This model evaluated the cycle, recycle and waste treatment stages of secondary batteries. Nickel-Metal Hydride (Ni-MH) batteries and Lithium ion (Li-ion) batteries were chosen as the typical secondary batteries in this study. Through this research, the following results were found: (1) A basic number of cycles should be defined. A minimum cycle number of 200 would result in an obvious decline of environmental loads for both battery types. Batteries with high energy density and long life expectancy have small environmental loads. Products and technology that help increase energy density and life expectancy should be encouraged. (2) Secondary batteries should be sorted out from municipal garbage. Meanwhile, different types of discarded batteries should be treated separately under policies and regulations. (3) The incineration rate has obvious impact on the Eco-indicator points of Nickel-Metal Hydride (Ni-MH) batteries. The influence of recycle rate on Lithium ion (Li-ion) batteries is more obvious. These findings indicate that recycling is the most promising direction for reducing secondary batteries' environmental loads. The model proposed here can be used to evaluate environmental loads of other secondary batteries and it can be useful for proposing policies and countermeasures to reduce the environmental impact of secondary batteries.",
"title": ""
},
{
"docid": "cd0c1507c1187e686c7641388413d3b5",
"text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.",
"title": ""
},
{
"docid": "a1bd6742011302d35527cdbad73a82a3",
"text": "The Semantic Web contains an enormous amount of information in the form of knowledge bases (KB). To make this information available, many question answering (QA) systems over KBs were created in the last years. Building a QA system over KBs is difficult because there are many different challenges to be solved. In order to address these challenges, QA systems generally combine techniques from natural language processing, information retrieval, machine learning and Semantic Web. The aim of this survey is to give an overview of the techniques used in current QA systems over KBs. We present the techniques used by the QA systems which were evaluated on a popular series of benchmarks: Question Answering over Linked Data. Techniques that solve the same task are first grouped together and then described. The advantages and disadvantages are discussed for each technique. This allows a direct comparison of similar techniques. Additionally, we point to techniques that are used over WebQuestions and SimpleQuestions, which are two other popular benchmarks for QA systems.",
"title": ""
},
{
"docid": "9ecf815bfb76760f2166240aee3a6f24",
"text": "This paper reviews the current state of power supply technology platforms and highlights future trends and challenges toward realizing fully monolithic power converters. This paper presents a detailed survey of relevant power converter technologies, namely power supply in package and power supply on chip (PwrSoC). The performance of different power converter solutions reported in the literature is benchmarked against existing commercial products. This paper presents a detailed review of integrated magnetics technologies, primarily microinductors, a key component in realizing a monolithic power converter. A detailed review and comparison of different microinductor structures and the magnetic materials used as inductor cores is presented. The deposition techniques for integrating the magnetic materials in the microinductor structures are discussed. This paper proposes the use of two performance metrics or figures of merit in order to compare the dc and ac performance of individual microinductor structures. Finally, the authors discuss future trends, key challenges, and potential solutions in the realization of the “holy grail” of monolithically integrated power supplies (PwrSoC).",
"title": ""
},
{
"docid": "cc1b8f1689c45c53e461dc268c664f53",
"text": "This paper presents a one switch silicon carbide JFET normally-ON resonant inverter applied to induction heating for consumer home cookers. The promising characteristics of silicon carbide (SiC) devices need to be verified in practical applications; therefore, the objective of this work is to compare Si IGBTs and normally-ON commercially available JFET in similar operating conditions, with two similar boards. The paper describes the gate circuit implemented, the design of the basic converter in ideal operation, namely Zero Voltage Switching (ZVS) and Zero Derivative Voltage Switching (ZVDS), as well as some preliminary comparative results for 700W and 2 kW output power delivered to an induction heating coil and load.",
"title": ""
},
{
"docid": "e4861d48d54e0c48f241b5adb1a893e6",
"text": "With the rapid development of the World Wide Web, electronic word-of-mouth interaction has made consumers active participants. Nowadays, a large number of reviews posted by the consumers on the Web provide valuable information to other consumers. Such information is highly essential for decision making and hence popular among the internet users. This information is very valuable not only for prospective consumers to make decisions but also for businesses in predicting the success and sustainability. In this paper, a Gini Index based feature selection method with Support Vector Machine (SVM) classifier is proposed for sentiment classification for large movie review data set. The results show that our Gini Index method has better classification performance in terms of reduced error rate and accuracy.",
"title": ""
},
{
"docid": "ffa15e86e575d5fdf2ccf0dcafe74a93",
"text": "We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal O*(√T)regret. The setting is a natural generalization of the nonstochastic multiarmed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods. Disciplines Statistics and Probability Comments At the time of publication, author Alexander Rakhlin was affiliated with the University of California, Berkeley. Currently, he is a faculty member at the Statistics Department at the University of Pennsylvania. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/statistics_papers/110 Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization Jacob Abernethy Computer Science Division UC Berkeley jake@cs.berkeley.edu (eligible for best student paper award) Elad Hazan IBM Almaden hazan@us.ibm.com Alexander Rakhlin Computer Science Division UC Berkeley rakhlin@cs.berkeley.edu",
"title": ""
},
{
"docid": "96516274e1eb8b9c53296a935f67ca2a",
"text": "Recurrent neural networks that are <italic>trained</italic> to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidel discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can <italic>construct</italic> second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of <italic>arbitrary length</italic>. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with <italic>n</italic> state and <italic>m</italic>input alphabet symbols, the constructive algorithm generates a “programmed” neural network with <italic>O</italic>(<italic>n</italic>) neurons and <italic>O</italic>(<italic>mn</italic>) weights. We compare our algorithm to other methods proposed in the literature.",
"title": ""
},
{
"docid": "c736258623c7f977ebc00f5555d13e02",
"text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.",
"title": ""
},
{
"docid": "67ca6efda7f90024cc9ae50ebb4181b7",
"text": "Nowadays data growth is directly proportional to time and it is a major challenge to store the data in an organised fashion. Document clustering is the solution for organising relevant documents together. In this paper, a web clustering algorithm namely WDC-KABC is proposed to cluster the web documents effectively. The proposed algorithm uses the features of both K-means and Artificial Bee Colony (ABC) clustering algorithm. In this paper, ABC algorithm is employed as the global search optimizer and K-means is used for refining the solutions. Thus, the quality of the cluster is improved. The performance of WDC-KABC is analysed with four different datasets (webkb, wap, rec0 and 7sectors). The proposed algorithm is compared with existing algorithms such as K-means, Particle Swarm Optimization, Hybrid of Particle Swarm Optimization and K-means and Ant Colony Optimization. The experimental results of WDC-KABC are satisfactory, in terms of precision, recall, f-measure, accuracy and error rate.",
"title": ""
},
{
"docid": "ed2a2ede0be8581c0d719e247a2f1d96",
"text": "Beginning Application Lifecycle Management is a guide to an area of rapidly growing interest within the development community: managing the entire cycle of building software. ALM is an area that spans everything from requirements specifications to retirement of an IT-system or application. Because its techniques allow you to deal with the process of developing applications across many areas of responsibility and across many different disciplines, the benefits and effects of ALM techniques used on your project can be wide-ranging and pronounced. In this book, author Joachim Rossberg will show you what ALM is and why it matters. He will also show you how you can assess your current situation and how you can use this assessment to create the road ahead for improving or implementing your own ALM process across all of your team’s development efforts. Beginning Application Lifecycle Management can be implemented on any platform. This book uses Microsoft Team Foundation Server as a foundation in many examples, but the key elements are platform independent and you’ll find the book written in a platform agnostic way.",
"title": ""
},
{
"docid": "e4546038f0102d0faac18ac96e50793d",
"text": "Ontologies have been increasingly used as a core representation formalism in medical information systems. Diagnosis is one of the highly relevant reasoning problems in this domain. In recent years this problem has captured attention also in the description logics community and various proposals on formalising abductive reasoning problems and their computational support appeared. In this paper, we focus on a practical diagnostic problem from a medical domain – the diagnosis of diabetes mellitus – and we try to formalize it in DL in such a way that the expected diagnoses are abductively derived. Our aim in this work is to analyze abductive reasoning in DL from a practical perspective, considering more complex cases than trivial examples typically considered by the theoryor algorithm-centered literature, and to evaluate the expressivity as well as the particular formulation of the abductive reasoning problem needed to capture medical diagnosis.",
"title": ""
},
{
"docid": "9aad2d4dd17bb3906add18578df28580",
"text": "Likelihood ratio policy gradient methods have been some of the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (i) using the past experience to estimate only the gradient of the expected return U(θ) at the current policy parameterization θ, rather than to obtain a more complete estimate of U(θ), and (ii) using past experience under the current policy only rather than using all past experience to improve the estimates. We present a new policy search method, which leverages both of these observations as well as generalized baselines—a new technique which generalizes commonly used baseline techniques for policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.",
"title": ""
},
{
"docid": "ec6e955f3f79ef1706fc6b9b16326370",
"text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.",
"title": ""
},
{
"docid": "c450ac5c84d962bb7f2262cf48e1280a",
"text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. The degree of animal interaction significantly influenced outcomes. Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.",
"title": ""
},
{
"docid": "c78c4dc2475c0fa382c2233c064efe4d",
"text": "We give the first algorithm for kernel Nyström approximation that runs in linear time in the number of training points and is provably accurate for all kernel matrices, without dependence on regularity or incoherence conditions. The algorithm projects the kernel onto a set of s landmark points sampled by their ridge leverage scores, requiring just O(ns) kernel evaluations and O(ns2) additional runtime. While leverage score sampling has long been known to give strong theoretical guarantees for Nyström approximation, by employing a fast recursive sampling scheme, our algorithm is the first to make the approach scalable. Empirically we show that it finds more accurate kernel approximations in less time than popular techniques such as classic Nyström approximation and the random Fourier features method.",
"title": ""
},
{
"docid": "90eeae710c92da9dd129647488b604c7",
"text": "Finding information is becoming a major part of our daily life. Entire sectors, from Web users to scientists and intelligence analysts, are increasingly struggling to keep up with the larger and larger amounts of content published every day. With this much data, it is often easy to miss the big picture.\n In this article, we investigate methods for automatically connecting the dots---providing a structured, easy way to navigate within a new topic and discover hidden connections. We focus on the news domain: given two news articles, our system automatically finds a coherent chain linking them together. For example, it can recover the chain of events starting with the decline of home prices (January 2007), and ending with the health care debate (2009).\n We formalize the characteristics of a good chain and provide a fast search-driven algorithm to connect two fixed endpoints. We incorporate user feedback into our framework, allowing the stories to be refined and personalized. We also provide a method to handle partially-specified endpoints, for users who do not know both ends of a story. Finally, we evaluate our algorithm over real news data. Our user studies demonstrate that the objective we propose captures the users’ intuitive notion of coherence, and that our algorithm effectively helps users understand the news.",
"title": ""
},
{
"docid": "497d72ce075f9bbcb2464c9ab20e28de",
"text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.",
"title": ""
},
{
"docid": "ef9a51b5b3a4bcab7867819070801e8a",
"text": "For any given research area, one cannot tell how many studies have been conducted but never reported. The extreme view of the \"file drawer problem\" is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results. Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.",
"title": ""
},
{
"docid": "dc54b73eb740bc1bbdf1b834a7c40127",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
}
] | scidocsrr |
dc8b4eb4d943b213d19618f6162d3159 | Similarities and Differences in Chinese and Caucasian Adults' Use of Facial Cues for Trustworthiness Judgments | [
{
"docid": "b66609e66cc9c3844974b3246b8f737e",
"text": "— Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. 0 When social psychologists began in earnest to study physical attractiveness , they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rott-mann, 1966) and other aspects of human interaction (Berscheid & Wal-ster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plum-age, their mates wear bright plumage that must be conspicuous to predators. Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …",
"title": ""
},
{
"docid": "cbf96988cc476a76bbf650bfa5b88e0e",
"text": "The authors examined the generalizability of first impressions from faces previously documented in industrialized cultures to the Tsimane’ people in the remote Bolivian rainforest. Tsimane’ as well as U.S. judges showed within-culture agreement in impressions of attractiveness, babyfaceness, and traits (healthy, intelligent/knowledgeable, dominant/respected, and sociable/warm) of own-culture faces. Both groups also showed within-culture agreement for impressions of otherculture faces, although it was weaker than for own-culture faces, particularly among Tsimane’ judges. Moreover, there was between-culture agreement, particularly for Tsimane’ faces. Use of facial attractiveness to judge traits contributed to agreement within and between cultures but did not fully explain it. Furthermore, Tsimane’, like U.S., judges showed a strong attractiveness halo in impressions of faces from both cultures as well as the babyface stereotype, albeit more weakly. In addition to cross-cultural similarities in trait impressions from faces, supporting a universal mechanism, some effects were moderated by perceiver and face culture, consistent with perceiver attunements conditioned by culturally specific perceptual learning.",
"title": ""
}
] | [
{
"docid": "733b391b2b3722b46a2790fe6fb1bf7a",
"text": "Physicians often use chest X-rays to quickly and cheaply diagnose disease associated with the area. However, it is much more difficult to make clinical diagnoses with chest X-rays than with other imaging modalities such as CT or MRI. With computer-aided diagnosis, physicians can make chest X-ray diagnoses more quickly and accurately. Pneumonia is often diagnosed with chest X-Rays and kills around 50,000 people each year [1]. With computeraided diagnosis of pneumonia specifically, physicians can more accurately and efficiently diagnose the disease. In this project, we hope to train a model using the dataset described below to help physicians in making diagnoses of pneumonia in chest X-Rays. Our problem is thus a binary classification where the inputs are chest X-ray images and the output is one of two classes: pneumonia or non-pneumonia.",
"title": ""
},
{
"docid": "3da5087d3ba29b772ce6dcd30d4c1b67",
"text": "We prove that the coefficients of certain weight −1/2 harmonic Maass forms are “traces” of singular moduli for weak Maass forms. To prove this theorem, we construct a theta lift from spaces of weight −2 harmonic weak Maass forms to spaces of weight −1/2 vectorvalued harmonic weak Maass forms on Mp2(Z), a result which is of independent interest. We then prove a general theorem which guarantees (with bounded denominator) when such Maass singular moduli are algebraic. As an example of these results, we derive a formula for the partition function p(n) as a finite sum of algebraic numbers which lie in the usual discriminant −24n + 1 ring class field.",
"title": ""
},
{
"docid": "cfe92b50318c2df44ce169b3dc818211",
"text": "As illegal and unhealthy content on the Internet has gradually increased in recent years, there have been constant calls for Internet content regulation. But any regulation comes at a cost. Based on the principles of the cost-benefit theory, this article conducts an in-depth discussion on China’s current Internet content regulation, so as to reveal its latent patterns.",
"title": ""
},
{
"docid": "49ac845ffb5f5ec66cb7c175ca30b4aa",
"text": "A Model of Organizational Knowledge Management Maturity based on People, Process, and Technology L.G. Pee A. Kankanhalli Dept. of Information Systems School of Computing National University of Singapore (Forthcoming Journal of Information and Knowledge Management) Abstract Organizations are increasingly investing in knowledge management (KM) initiatives to promote the sharing, application, and creation of knowledge for competitive advantage. To guide and assess the progress of KM initiatives in organizations, various models have been proposed but a consistent approach that has been empirically tested is lacking. Based on the life cycle theory, this paper reviews, compares, and integrates existing models to propose a General KM Maturity Model (G-KMMM). G-KMMM encompasses the initial, aware, defined, managed, and optimizing stages, which are differentiated in terms of their characteristics related to the people, process, and technology aspects of KM. To facilitate empirical validation and application, an accompanying assessment tool is also explicated. As an initial validation of the proposed G-KMMM, a case study of a multi-unit information system organization of a large public university was conducted. Findings indicate that GKMMM is a useful diagnostic tool that can assess and direct KM implementation in organizations.",
"title": ""
},
{
"docid": "3e3953e09f35c418316370f2318550aa",
"text": "Poker is ideal for testing automated reason ing under uncertainty. It introduces un certainty both by physical randomization and by incomplete information about op ponents' hands. Another source of uncer tainty is the limited information available to construct psychological models of opponents, their tendencies to bluff, play conservatively, reveal weakness, etc. and the relation be tween their hand strengths and betting be haviour. All of these uncertainties must be assessed accurately and combined effectively for any reasonable level of skill in the game to be achieved, since good decision making is highly sensitive to those tasks. We de scribe our Bayesian Poker Program (BPP) , which uses a Bayesian network to model the program's poker hand, the opponent's hand and the opponent's playing behaviour con ditioned upon the hand, and betting curves which govern play given a probability of win ning. The history of play with opponents is used to improve BPP's understanding of their behaviour. We compare BPP experimentally with: a simple rule-based system; a program which depends exclusively on hand probabil ities (i.e., without opponent modeling); and with human players. BPP has shown itself to be an effective player against all these opponents, barring the better humans. We also sketch out some likely ways of improv ing play.",
"title": ""
},
{
"docid": "7ca6872c94f1bb7cfcb3e05d91a5c097",
"text": "Increasing electrical energy demand, modern lifestyles and energy usage patterns have made the world fully dependant on power systems. This instigated mandatory requirements for the operators to maintain high reliability and stability of the power system grid. However, the power system is a highly nonlinear system, which changes its operations continuously. Therefore, it is very challenging and uneconomical to make the system be stable for all disturbances. The system is usually designed to handle a single outage at a time. However, during the last decade several major blackouts were reported and all of them started with single outages. Each major blackout was mandatorily and transparently reported to the public. The properly written blackout reports help to minimize the operational risk, by strengthening the system and its operations based on selected high risk contingencies. In the last decade, several major blackouts were reported separately in many research papers. This paper lists a good collection of properly reported literatures on power system stability and reliability including history of blackouts. Some critical comments on root causes, lessons learnt from the blackouts and solutions are addressed while briefly discussing the blackout events presented in published literatures.",
"title": ""
},
{
"docid": "45cbfbe0a0bcf70910a6d6486fb858f0",
"text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.",
"title": ""
},
{
"docid": "6eb7bb6f623475f7ca92025fd00dbc27",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
{
"docid": "969ba9848fa6d02f74dabbce2f1fe3ab",
"text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "2b1002037b717f65e97defbf802d5fcd",
"text": "BACKGROUND\nDeletions of chromosome 19 have rarely been reported, with the exception of some patients with deletion 19q13.2 and Blackfan-Diamond syndrome due to haploinsufficiency of the RPS19 gene. Such a paucity of patients might be due to the difficulty in detecting a small rearrangement on this chromosome that lacks a distinct banding pattern. Array comparative genomic hybridisation (CGH) has become a powerful tool for the detection of microdeletions and microduplications at high resolution in patients with syndromic mental retardation.\n\n\nMETHODS AND RESULTS\nUsing array CGH, this study identified three interstitial overlapping 19q13.11 deletions, defining a minimal critical region of 2.87 Mb, associated with a clinically recognisable syndrome. The three patients share several major features including: pre- and postnatal growth retardation with slender habitus, severe postnatal feeding difficulties, microcephaly, hypospadias, signs of ectodermal dysplasia, and cutis aplasia over the posterior occiput. Interestingly, these clinical features have also been described in a previously reported patient with a 19q12q13.1 deletion. No recurrent breakpoints were identified in our patients, suggesting that no-allelic homologous recombination mechanism is not involved in these rearrangements.\n\n\nCONCLUSIONS\nBased on these results, the authors suggest that this chromosomal abnormality may represent a novel clinically recognisable microdeletion syndrome caused by haploinsufficiency of dosage sensitive genes in the 19q13.11 region.",
"title": ""
},
{
"docid": "8c07982729ca439c8e346cbe018a7198",
"text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.",
"title": ""
},
{
"docid": "999a1fbc3830ca0453760595046edb6f",
"text": "This paper introduces BoostMap, a method that can significantly reduce retrieval time in image and video database systems that employ computationally expensive distance measures, metric or non-metric. Database and query objects are embedded into a Euclidean space, in which similarities can be rapidly measured using a weighted Manhattan distance. Embedding construction is formulated as a machine learning task, where AdaBoost is used to combine many simple, ID embeddings into a multidimensional embedding that preserves a significant amount of the proximity structure in the original space. Performance is evaluated in a hand pose estimation system, and a dynamic gesture recognition system, where the proposed method is used to retrieve approximate nearest neighbors under expensive image and video similarity measures: In both systems, in quantitative experiments, BoostMap significantly increases efficiency, with minimal losses in accuracy. Moreover, the experiments indicate that BoostMap compares favorably with existing embedding methods that have been employed in computer vision and database applications, i.e., FastMap and Bourgain embeddings.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "e4227748f8fd9704aba160669dcdef52",
"text": "Broadly, artificial intelligence (AI) mainly entails technology constellations such as machine learning, natural language processing, perception, and reasoning since it is difficult to define [1]. Even though the field’s application and principles have undergone investigation for more than sixty-five years, modern improvements, attendant society excitement, and uses ensured its return to focus. The influence of the previous artificial intelligence systems is evident, introducing both opportunities and challenges, which enables the integration of future AI advances into the economic and social environments. It is apparent that most people today view AI as a robotics concept but it essentially incorporates broader technology ranges that are used widely [2]. From search engines to speech recognition, to learning/gaming structures and object detection, AI application has the potential to intensify in the human daily lives. The application is already experiencing use in the world of business as companies seek to study the needs of the consumers, as well as, other fields including healthcare and crime investigation. In this paper, I will discuss the perceptions of consumers regarding artificial intelligence and outline its impact in retail, healthcare, crime investigation, and employment.",
"title": ""
},
{
"docid": "b96853c2efbc22e4f636d90650bfd4fc",
"text": "BACKGROUND AND AIMS:The prevalence of functional dyspepsia (FD) in the general population is not known. The aim of this study is to measure the prevalence of FD and its risk factors in a multiethnic volunteer sample of the U.S. population.METHODS:One thousand employees at the Houston VA Medical Center were targeted with a symptom questionnaire asking about upper abdominal symptoms, followed by a request to undergo endsocopy. Dyspepsia was defined by the presence of epigastric pain, fullness, nausea, or vomiting, and FD was defined as dyspepsia in the absence of esophageal erosions, gastric ulcers, or duodenal ulcers or erosions. The presence of dyspepsia and FD was examined in multiple logistic regression analyses.RESULTS:A total of 465 employees completed the relevant questions and of those 203 had endoscopic examination. The age-adjusted prevalence rate of dyspepsia was 31.9 per 100 (95% CI: 26.7–37.1), and 15.8 per 100 (95% CI: 9.6–22.0) if participants with concomitant heartburn or acid regurgitation were excluded. Subjects with dyspepsia were more likely to report smoking, using antacids, aspirin or nonsteroidal antiinflammatory drugs (NSAIDs), and consulting a physician for their symptoms (p < 0.05) than participants without dyspepsia. Most (64.5%) participants with dyspepsia who underwent endoscopy had FD. The age-adjusted prevalence rate of FD was 29.2 per 100 (95% CI: 21.9–36.5), and 15.0 per 100 (6.7–23.3) if subjects with GERD were excluded. Apart from a trend towards association with older age in the multiple regression analysis, there were no significant predictors of FD among participants with dyspepsia.CONCLUSIONS:Most subjects with dyspepsia have FD. The prevalence of FD is high but predictors of FD remain poorly defined.",
"title": ""
},
{
"docid": "0caa6d4623fb0414facb76ccd8eaa235",
"text": "Because of large amounts of unstructured text data generated on the Internet, text mining is believed to have high commercial value. Text mining is the process of extracting previously unknown, understandable, potential and practical patterns or knowledge from the collection of text data. This paper introduces the research status of text mining. Then several general models are described to know text mining in the overall perspective. At last we classify text mining work as text categorization, text clustering, association rule extraction and trend analysis according to applications.",
"title": ""
},
{
"docid": "80ccbda4de8a765111ad8994f2ac9e95",
"text": "Smart grid, smart metering, electromobility, and the regulation of the power network are keywords of the transition in energy politics. In the future, the power grid will be smart. Based on different works, this article presents a data collection, analyzing, and monitoring software for a reference smart grid. We discuss two possible architectures for collecting data from energy analyzers and analyze their performance with respect to real-time monitoring, load peak analysis, and automated regulation of the power grid. In the first architecture, we analyze the latency, needed bandwidth, and scalability for collecting data over the Modbus TCP/IP protocol and in the second one over a RESTful web service. The analysis results show that the solution with Modbus is more scalable as the one with RESTful web service. However, the performance and scalability of both architectures are sufficient for our reference smart grid and",
"title": ""
},
{
"docid": "723d1a0cd7a65d0ac164c2749d481884",
"text": "...................................................................................................................v 1 Purpose of the Research and Development Effort........................................1 2 Defining Interoperability .................................................................................3 3 Models of Interoperability ...............................................................................5 3.1 Levels of Information System Interoperability ............................................5 3.2 Organizational Interoperability Maturity Model ...........................................6 3.3 NATO C3 Technical Architecture (NC3TA) Reference Model for Interoperability...........................................................................................7 3.4 Levels of Conceptual Interoperability (LCIM) Model...................................8 3.5 Layers of Coalition Interoperability.............................................................9 3.6 The System of Systems Interoperability (SOSI) Model ..............................9 4 Approach........................................................................................................13 4.1 Method ....................................................................................................13 4.2 Collaborators...........................................................................................14 5 Results: Current State ...................................................................................15 5.1 Observations on the SOSI Model ............................................................15 6 DoD Interoperability Initiatives .....................................................................17 6.1 Commands, Directorates and Centers.....................................................17 6.2 Standards ................................................................................................20 6.3 Strategies ................................................................................................20 6.4 Demonstrations, Exercises and Testbeds ................................................21 6.5 Joint and Coalition Force Integration Initiatives........................................22 6.6 DoD-Sponsored Research.......................................................................25 6.7 Other Initiatives .......................................................................................26 7 Interview and Workshop Findings................................................................27 7.1 General Themes......................................................................................27",
"title": ""
},
{
"docid": "3bbf4bd1daaf0f6f916268907410b88f",
"text": "UNLABELLED\nNoncarious cervical lesions are highly prevalent and may have different etiologies. Regardless of their origin, be it acid erosion, abrasion, or abfraction, restoring these lesions can pose clinical challenges, including access to the lesion, field control, material placement and handling, marginal finishing, patient discomfort, and chair time. This paper describes a novel technique for minimizing these challenges and optimizing the restoration of noncarious cervical lesions using a technique the author describes as the class V direct-indirect restoration. With this technique, clinicians can create precise extraoral margin finishing and polishing, while maintaining periodontal health and controlling polymerization shrinkage stress.\n\n\nCLINICAL SIGNIFICANCE\nThe clinical technique described in this article has the potential for being used routinely in treating noncarious cervical lesions, especially in cases without easy access and limited field control. Precise margin finishing and polishing is one of the greatest benefits of the class V direct-indirect approach, as the author has seen it work successfully in his practice over the past five years.",
"title": ""
}
] | scidocsrr |
7441e5c76b17cf1f246c3efebf0dd644 | PROBLEMS OF EMPLOYABILITY-A STUDY OF JOB – SKILL AND QUALIFICATION MISMATCH | [
{
"docid": "8e74a27a3edea7cf0e88317851bc15eb",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "c08e9731b9a1135b7fb52548c5c6f77e",
"text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "9c97a3ea2acfe09e3c60cbcfa35bab7d",
"text": "In comparison with document summarization on the articles from social media and newswire, argumentative zoning (AZ) is an important task in scientific paper analysis. Traditional methodology to carry on this task relies on feature engineering from different levels. In this paper, three models of generating sentence vectors for the task of sentence classification were explored and compared. The proposed approach builds sentence representations using learned embeddings based on neural network. The learned word embeddings formed a feature space, to which the examined sentence is mapped to. Those features are input into the classifiers for supervised classification. Using 10-cross-validation scheme, evaluation was conducted on the Argumentative-Zoning (AZ) annotated articles. The results showed that simply averaging the word vectors in a sentence works better than the paragraph to vector algorithm and by integrating specific cuewords into the loss function of the neural network can improve the classification performance. In comparison with the hand-crafted features, the word2vec method won for most of the categories. However, the hand-crafted features showed their strength on classifying some of the categories.",
"title": ""
},
{
"docid": "11e2ec2aab62ba8380e82a18d3fcb3d8",
"text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.",
"title": ""
},
{
"docid": "c38c2d8f7c21acc3fcb9b7d9ecc6d2d1",
"text": "In this paper we proposed new technique for human identification using fusion of both face and speech which can substantially improve the rate of recognition as compared to the single biometric identification for security system development. The proposed system uses principal component analysis (PCA) as feature extraction techniques which calculate the Eigen vectors and Eigen values. These feature vectors are compared using the similarity measure algorithm like Mahalanobis Distances for the decision making. The Mel-Frequency cestrum coefficients (MFCC) feature extraction techniques are used for speech recognition in our project. Cross correlation coefficients are considered as primary features. The Hidden Markov Model (HMM) is used to calculate the like hoods in the MFCC extracted features to make the decision about the spoken wards.",
"title": ""
},
{
"docid": "c8984cf950244f0d300c6446bcb07826",
"text": "The grounded theory approach to doing qualitative research in nursing has become very popular in recent years. I confess to never really having understood Glaser and Strauss' original book: The Discovery of Grounded Theory. Since they wrote it, they have fallen out over what grounded theory might be and both produced their own versions of it. I welcomed, then, Kathy Charmaz's excellent and practical guide.",
"title": ""
},
{
"docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
{
"docid": "abec336a59db9dd1fdea447c3c0ff3d3",
"text": "Neural network training relies on our ability to find “good” minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and wellchosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, is not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple “filter normalization” method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.",
"title": ""
},
{
"docid": "8c95392ab3cc23a7aa4f621f474d27ba",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "2062b94ee661e5e50cbaa1c952043114",
"text": "The harsh operating environment of the automotive application makes the semi-permanent connector susceptible to intermittent high contact resistance which eventually leads to failure. Fretting corrosion is often the cause of these failures. However, laboratory testing of sample contact materials produce results that do not correlate with commercially tested connectors. A multicontact (M-C) reliability model is developed to bring together the fundamental studies and studies conducted on commercially available connector terminals. It is based on fundamental studies of the single contact interfaces and applied to commercial multicontact terminals. The model takes into consideration firstly, that a single contact interface may recover to low contact resistance after attaining a high value and secondly, that a terminal consists of more than one contact interface. For the connector to fail, all contact interfaces have to be in the failed state at the same time.",
"title": ""
},
{
"docid": "d8a7ab2abff4c2e5bad845a334420fe6",
"text": "Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of 15 TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance among physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [\"Consistent tone reproduction,\" in Proceedings of Computer Graphics and Imaging (2008)] and Krawczyk [\"Lightness perception in tone reproduction for high dynamic range images,\" in Proceedings of Eurographics (2005), p. 3] obtained the better results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "79425b2b27a8f80d2c4012c76e6eb8f6",
"text": "This paper examines previous Technology Acceptance Model (TAM)-related studies in order to provide an expanded model that explains consumers’ acceptance of online purchasing. Our model provides extensions to the original TAM by including constructs such as social influence and voluntariness; it also examines the impact of external variables including trust, privacy, risk, and e-loyalty. We surveyed consumers in the United States and Australia. Our findings suggest that our expanded model serves as a very good predictor of consumers’ online purchasing behaviors. The linear regression model shows a respectable amount of variance explained for Behavioral Intention (R 2 = .627). Suggestions are provided for the practitioner and ideas are presented for future research.",
"title": ""
},
{
"docid": "b591b75b4653c01e3525a0889e7d9b90",
"text": "The concept of isogeometric analysis is proposed. Basis functions generated from NURBS (Non-Uniform Rational B-Splines) are employed to construct an exact geometric model. For purposes of analysis, the basis is refined and/or its order elevated without changing the geometry or its parameterization. Analogues of finite element hand p-refinement schemes are presented and a new, more efficient, higher-order concept, k-refinement, is introduced. Refinements are easily implemented and exact geometry is maintained at all levels without the necessity of subsequent communication with a CAD (Computer Aided Design) description. In the context of structural mechanics, it is established that the basis functions are complete with respect to affine transformations, meaning that all rigid body motions and constant strain states are exactly represented. Standard patch tests are likewise satisfied. Numerical examples exhibit optimal rates of convergence for linear elasticity problems and convergence to thin elastic shell solutions. A k-refinement strategy is shown to converge toward monotone solutions for advection–diffusion processes with sharp internal and boundary layers, a very surprising result. It is argued that isogeometric analysis is a viable alternative to standard, polynomial-based, finite element analysis and possesses several advantages. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "dc883936f3cc19008983c9a5bb2883f3",
"text": "Laparoscopic surgery provides patients with less painful surgery but is more demanding for the surgeon. The increased technological complexity and sometimes poorly adapted equipment have led to increased complaints of surgeon fatigue and discomfort during laparoscopic surgery. Ergonomic integration and suitable laparoscopic operating room environment are essential to improve efficiency, safety, and comfort for the operating team. Understanding ergonomics can not only make life of surgeon comfortable in the operating room but also reduce physical strains on surgeon.",
"title": ""
},
{
"docid": "e9b438cfe853e98f05b661f9149c0408",
"text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.",
"title": ""
},
{
"docid": "cf5829d1bfa1ae243bbf67776b53522d",
"text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
},
{
"docid": "6ef04225b5f505a48127594a12fef112",
"text": "For differential operators of order 2, this paper presents a new method that combines generalized exponents to find those solutions that can be represented in terms of Bessel functions.",
"title": ""
}
] | scidocsrr |
7ff2b333260bdd17508da12bebfd92a6 | Mistaking minds and machines: How speech affects dehumanization and anthropomorphism. | [
{
"docid": "446fa2bda9922dfd9c18b1c49520dff3",
"text": "Anthropomorphism describes the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions. Although surprisingly common, anthropomorphism is not invariant. This article describes a theory to explain when people are likely to anthropomorphize and when they are not, focused on three psychological determinants--the accessibility and applicability of anthropocentric knowledge (elicited agent knowledge), the motivation to explain and understand the behavior of other agents (effectance motivation), and the desire for social contact and affiliation (sociality motivation). This theory predicts that people are more likely to anthropomorphize when anthropocentric knowledge is accessible and applicable, when motivated to be effective social agents, and when lacking a sense of social connection to other humans. These factors help to explain why anthropomorphism is so variable; organize diverse research; and offer testable predictions about dispositional, situational, developmental, and cultural influences on anthropomorphism. Discussion addresses extensions of this theory into the specific psychological processes underlying anthropomorphism, applications of this theory into robotics and human-computer interaction, and the insights offered by this theory into the inverse process of dehumanization.",
"title": ""
}
] | [
{
"docid": "c23a86bc6d8011dab71ac5e1e2051c3b",
"text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.",
"title": ""
},
{
"docid": "787ed9e3816d70ceb04a366d9d0cb51e",
"text": "We propose a novel method for binarization of color documents whereby the foreground text is output as black and the background as white regardless of the polarity of foreground-background shades. The method employs an edge-based connected component approach and automatically determines a threshold for each component. It has several advantages over existing binarization methods. Firstly, it can handle documents with multi-colored texts with different background shades. Secondly, the method is applicable to documents having text of widely varying sizes, usually not handled by local binarization methods. Thirdly, the method automatically computes the threshold for binarization and the logic for inverting the output from the image data and does not require any input parameter. The proposed method has been applied to a broad domain of target document types and environment and is found to have a good adaptability.",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "fe48a551dfbe397b7bcf52e534dfcf00",
"text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.",
"title": ""
},
{
"docid": "a9d5e1a113052c00823ebf6145ec38e6",
"text": "Deploying an automatic speech recognition system with reasonable performance requires expensive and time-consuming in-domain transcription. Previous work demonstrated that non-professional annotation through Amazon’s Mechanical Turk can match professional quality. We use Mechanical Turk to transcribe conversational speech for as little as one thirtieth the cost of professional transcription. The higher disagreement of nonprofessional transcribers does not have a significant effect on system performance. While previous work demonstrated that redundant transcription can improve data quality, we found that resources are better spent collecting more data. Finally, we suggest a concrete method for quality control without needing professional transcription.",
"title": ""
},
{
"docid": "749294846f355424b1c360b21e054fea",
"text": "BACKGROUND\nResults of small trials suggest that early interventions for social communication are effective for the treatment of autism in children. We therefore investigated the efficacy of such an intervention in a larger trial.\n\n\nMETHODS\nChildren with core autism (aged 2 years to 4 years and 11 months) were randomly assigned in a one-to-one ratio to a parent-mediated communication-focused (Preschool Autism Communication Trial [PACT]) intervention or treatment as usual at three specialist centres in the UK. Those assigned to PACT were also given treatment as usual. Randomisation was by use of minimisation of probability in the marginal distribution of treatment centre, age (</=42 months or >42 months), and autism severity (Autism Diagnostic Observation Schedule-Generic [ADOS-G] algorithm score 12-17 or 18-24). Primary outcome was severity of autism symptoms (a total score of social communication algorithm items from ADOS-G, higher score indicating greater severity) at 13 months. Complementary secondary outcomes were measures of parent-child interaction, child language, and adaptive functioning in school. Analysis was by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN58133827.\n\n\nRESULTS\n152 children were recruited. 77 were assigned to PACT (London [n=26], Manchester [n=26], and Newcastle [n=25]); and 75 to treatment as usual (London [n=26], Manchester [n=26], and Newcastle [n=23]). At the 13-month endpoint, the severity of symptoms was reduced by 3.9 points (SD 4.7) on the ADOS-G algorithm in the group assigned to PACT, and 2.9 (3.9) in the group assigned to treatment as usual, representing a between-group effect size of -0.24 (95% CI -0.59 to 0.11), after adjustment for centre, sex, socioeconomic status, age, and verbal and non-verbal abilities. Treatment effect was positive for parental synchronous response to child (1.22, 0.85 to 1.59), child initiations with parent (0.41, 0.08 to 0.74), and for parent-child shared attention (0.33, -0.02 to 0.68). Effects on directly assessed language and adaptive functioning in school were small.\n\n\nINTERPRETATION\nOn the basis of our findings, we cannot recommend the addition of the PACT intervention to treatment as usual for the reduction of autism symptoms; however, a clear benefit was noted for parent-child dyadic social communication.\n\n\nFUNDING\nUK Medical Research Council, and UK Department for Children, Schools and Families.",
"title": ""
},
{
"docid": "565ba6935c4fd6afdb4d393553a70d0b",
"text": "This paper presents the problem definition and guidelines of the next generation stru control benchmark problem for seismically excited buildings. Focusing on a 20-story steel s ture representing a typical midto high-rise building designed for the Los Angeles, Califo region, the goal of this study is to provide a clear basis to evaluate the efficacy of various tural control strategies. An evaluationmodel has been developed that portrays the salient feat of the structural system. Control constraints and evaluation criteria are presented for the problem. The task of each participant in this benchmark study is to define (including devices sors and control algorithms), evaluate and report on their proposed control strategies. Thes egies may be either passive, active, semi-active or a combination thereof. A simulation pro has been developed and made available to facilitate direct comparison of the efficiency and of the various control strategies. To illustrate some of the design challenges a sample contr tem design is presented, although this sample is not intended to be viewed as a comp design. Introduction The protection of civil structures, including material content and human occupants, is out a doubt a world-wide priority. The extent of protection may range from reliable operation occupant comfort to human and structural survivability. Civil structures, including existing future buildings, towers and bridges, must be adequately protected from a variety of e including earthquakes, winds, waves and traffic. The protection of structures is now moving relying entirely on the inelastic deformation of the structure to dissipate the energy of s dynamic loadings, to the application of passive, active and semi-active structural control de to mitigate undesired responses to dynamic loads. In the last two decades, many control algorithms and devices have been proposed fo engineering applications (Soong 1990; Housner, et al. 1994; Soong and Constantinou 199 Fujino,et al. 1996; Spencer and Sain 1997), each of which has certain advantages, depend the specific application and the desired objectives. At the present time, structural control res is greatly diversified with regard to these specific applications and desired objectives. A com basis for comparison of the various algorithms and devices does not currently exist. Deter 1. Prof., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 2. Doc. Cand., Dept. of Civil Engrg. and Geo. Sci., Univ. of Notre Dame, Notre Dame, IN 46556-0767. 3. Assist. Prof., Dept. of Civil Engrg., Washington Univ., St. Louis, MO 63130-4899. March 22, 1999 1 Spencer, et al.",
"title": ""
},
{
"docid": "021d51e8152d2e2a9a834b5838139605",
"text": "Social networking sites (SNSs) have gained substantial popularity among youth in recent years. However, the relationship between the use of these Web-based platforms and mental health problems in children and adolescents is unclear. This study investigated the association between time spent on SNSs and unmet need for mental health support, poor self-rated mental health, and reports of psychological distress and suicidal ideation in a representative sample of middle and high school children in Ottawa, Canada. Data for this study were based on 753 students (55% female; Mage=14.1 years) in grades 7-12 derived from the 2013 Ontario Student Drug Use and Health Survey. Multinomial logistic regression was used to examine the associations between mental health variables and time spent using SNSs. Overall, 25.2% of students reported using SNSs for more than 2 hours every day, 54.3% reported using SNSs for 2 hours or less every day, and 20.5% reported infrequent or no use of SNSs. Students who reported unmet need for mental health support were more likely to report using SNSs for more than 2 hours every day than those with no identified unmet need for mental health support. Daily SNS use of more than 2 hours was also independently associated with poor self-rating of mental health and experiences of high levels of psychological distress and suicidal ideation. The findings suggest that students with poor mental health may be greater users of SNSs. These results indicate an opportunity to enhance the presence of health service providers on SNSs in order to provide support to youth.",
"title": ""
},
{
"docid": "cba9f80ab39de507e84b68dc598d0bb9",
"text": "In this paper we construct a noncommutative space of “pointed Drinfeld modules” that generalizes to the case of function fields the noncommutative spaces of commensurability classes of Q-lattices. It extends the usual moduli spaces of Drinfeld modules to possibly degenerate level structures. In the second part of the paper we develop some notions of quantum statistical mechanics in positive characteristic and we show that, in the case of Drinfeld modules of rank one, there is a natural time evolution on the associated noncommutative space, which is closely related to the positive characteristic L-functions introduced by Goss. The points of the usual moduli space of Drinfeld modules define KMS functionals for this time evolution. We also show that the scaling action on the dual system is induced by a Frobenius action, up to a Wick rotation to imaginary time. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "99549d037b403f78f273b3c64181fd21",
"text": "From social media has emerged continuous needs for automatic travel recommendations. Collaborative filtering (CF) is the most well-known approach. However, existing approaches generally suffer from various weaknesses. For example , sparsity can significantly degrade the performance of traditional CF. If a user only visits very few locations, accurate similar user identification becomes very challenging due to lack of sufficient information for effective inference. Moreover, existing recommendation approaches often ignore rich user information like textual descriptions of photos which can reflect users' travel preferences. The topic model (TM) method is an effective way to solve the “sparsity problem,” but is still far from satisfactory. In this paper, an author topic model-based collaborative filtering (ATCF) method is proposed to facilitate comprehensive points of interest (POIs) recommendations for social users. In our approach, user preference topics, such as cultural, cityscape, or landmark, are extracted from the geo-tag constrained textual description of photos via the author topic model instead of only from the geo-tags (GPS locations). Advantages and superior performance of our approach are demonstrated by extensive experiments on a large collection of data.",
"title": ""
},
{
"docid": "7e7cf44ce3c8982f61c6a93b89aa66e3",
"text": "This paper presents SceneCut, a novel approach to jointly discover previously unseen objects and non-object surfaces using a single RGB-D image. SceneCut's joint reasoning over scene semantics and geometry allows a robot to detect and segment object instances in complex scenes where modern deep learning-based methods either fail to separate object instances, or fail to detect objects that were not seen during training. SceneCut automatically decomposes a scene into meaningful regions which either represent objects or scene surfaces. The decomposition is qualified by an unified energy function over objectness and geometric fitting. We show how this energy function can be optimized efficiently by utilizing hierarchical segmentation trees. Moreover, we leverage a pre-trained convolutional oriented boundary network to predict accurate boundaries from images, which are used to construct high-quality region hierarchies. We evaluate SceneCut on several different indoor environments, and the results show that SceneCut significantly outperforms all the existing methods.",
"title": ""
},
{
"docid": "1f86ed06a01e7a37c5ce96d776b95511",
"text": "This paper presents a technique for incorporating terrain traversability data into a global path planning method for field mobile robots operating on rough natural terrain. The focus of this approach is on assessing the traversability characteristics of the global terrain using a multi-valued map representation of traversal dificulty, and using this information to compute a traversal cost function to ensure robot survivability. The traversal cost is then utilized by a global path planner to find an optimally safe path through the terrain. A graphical simulator for the terrain-basedpath planning is presented. The path planner is applied to a commercial Pioneer 2-AT robot andfield test results are provided.",
"title": ""
},
{
"docid": "0829cf1fb1654525627fdc61d1814196",
"text": "The selection of indexing terms for representing documents is a key decision that limits how effective subsequent retrieval can be. Often stemming algorithms are used to normalize surface forms, and thereby address the problem of not finding documents that contain words related to query terms through infectional or derivational morphology. However, rule-based stemmers are not available for every language and it is unclear which methods for coping with morphology are most effective. In this paper we investigate an assortment of techniques for representing text and compare these approaches using data sets in eighteen languages and five different writing systems.\n We find character n-gram tokenization to be highly effective. In half of the languages examined n-grams outperform unnormalized words by more than 25%; in highly infective languages relative improvements over 50% are obtained. In languages with less morphological richness the choice of tokenization is not as critical and rule-based stemming can be an attractive option, if available. We also conducted an experiment to uncover the source of n-gram power and a causal relationship between the morphological complexity of a language and n-gram effectiveness was demonstrated.",
"title": ""
},
{
"docid": "4a39ad1bac4327a70f077afa1d08c3f0",
"text": "Machine learning plays a role in many aspects of modern IR systems, and deep learning is applied in all of them. The fast pace of modern-day research has given rise to many approaches to many IR problems. The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. The aim of this full- day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR.",
"title": ""
},
{
"docid": "68a31c4830f71e7e94b90227d69b5a79",
"text": "For many primary storage customers, storage must balance the requirements for large capacity, high performance, and low cost. A well studied technique is to place a solid state drive (SSD) cache in front of hard disk drive (HDD) storage, which can achieve much of the performance benefit of SSDs and the cost per gigabyte efficiency of HDDs. To further lower the cost of SSD caches and increase effective capacity, we propose the addition of data reduction techniques. Our cache architecture, called Nitro, has three main contributions: (1) an SSD cache design with adjustable deduplication, compression, and large replacement units, (2) an evaluation of the trade-offs between data reduction, RAM requirements, SSD writes (reduced up to 53%, which improves lifespan), and storage performance, and (3) acceleration of two prototype storage systems with an increase in IOPS (up to 120%) and reduction of read response time (up to 55%) compared to an SSD cache without Nitro. Additional benefits of Nitro include improved random read performance, faster snapshot restore, and reduced writes to SSDs.",
"title": ""
},
{
"docid": "4f5b26ab2d8bd68953d473727f6f5589",
"text": "OBJECTIVE\nThe study assessed the impact of mindfulness training on occupational safety of hospital health care workers.\n\n\nMETHODS\nThe study used a randomized waitlist-controlled trial design to test the effect of an 8-week mindfulness-based stress reduction (MBSR) course on self-reported health care worker safety outcomes, measured at baseline, postintervention, and 6 months later.\n\n\nRESULTS\nTwenty-three hospital health care workers participated in the study (11 in immediate intervention group; 12 in waitlist control group). The MBSR training decreased workplace cognitive failures (F [1, 20] = 7.44, P = 0.013, (Equation is included in full-text article.)) and increased safety compliance behaviors (F [1, 20] = 7.79, P = 0.011, (Equation is included in full-text article.)) among hospital health care workers. Effects were stable 6 months following the training. The MBSR intervention did not significantly affect participants' promotion of safety in the workplace (F [1, 20] = 0.40, P = 0.54, (Equation is included in full-text article.)).\n\n\nCONCLUSIONS\nMindfulness training may potentially decrease occupational injuries of health care workers.",
"title": ""
},
{
"docid": "e73060d189e9a4f4fd7b93e1cab22955",
"text": "We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"title": ""
},
{
"docid": "ddce6163a3fe4283a39fb341649c0ded",
"text": "Apoptosis induced by TNF-receptor I (TNFR1) is thought to proceed via recruitment of the adaptor FADD and caspase-8 to the receptor complex. TNFR1 signaling is also known to activate the transcription factor NF-kappa B and promote survival. The mechanism by which this decision between cell death and survival is arbitrated is not clear. We report that TNFR1-induced apoptosis involves two sequential signaling complexes. The initial plasma membrane bound complex (complex I) consists of TNFR1, the adaptor TRADD, the kinase RIP1, and TRAF2 and rapidly signals activation of NF-kappa B. In a second step, TRADD and RIP1 associate with FADD and caspase-8, forming a cytoplasmic complex (complex II). When NF-kappa B is activated by complex I, complex II harbors the caspase-8 inhibitor FLIP(L) and the cell survives. Thus, TNFR1-mediated-signal transduction includes a checkpoint, resulting in cell death (via complex II) in instances where the initial signal (via complex I, NF-kappa B) fails to be activated.",
"title": ""
},
{
"docid": "cbce30ed2bbdcd25fb708394dff1b7b6",
"text": "Current syntactic accounts of English resultatives are based on the assumption that result XPs are predicated of underlying direct objects. This assumption has helped to explain the presence of reflexive pronouns with some intransitive verbs but not others and the apparent lack of result XPs predicated of subjects of transitive verbs. We present problems for and counterexamples to some of the basic assumptions of the syntactic approach, which undermine its explanatory power. We develop an alternative account that appeals to principles governing the well-formedness of event structure and the event structure-to-syntax mapping. This account covers the data on intransitive verbs and predicts the distribution of subject-predicated result XPs with transitive verbs.*",
"title": ""
}
] | scidocsrr |
63d4fbac01a3a6bd026ce119f8fa3e5e | Disparity and occlusion estimation in multiocular systems and their coding for the communication of multiview image sequences | [
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
}
] | [
{
"docid": "db79c4fc00f18c3d7822c9f79d1a4a83",
"text": "We propose a new pipeline for optical flow computation, based on Deep Learning techniques. We suggest using a Siamese CNN to independently, and in parallel, compute the descriptors of both images. The learned descriptors are then compared efficiently using the L2 norm and do not require network processing of patch pairs. The success of the method is based on an innovative loss function that computes higher moments of the loss distributions for each training batch. Combined with an Approximate Nearest Neighbor patch matching method and a flow interpolation technique, state of the art performance is obtained on the most challenging and competitive optical flow benchmarks.",
"title": ""
},
{
"docid": "135ceae69b9953cf8fe989dcf8d3d0da",
"text": "Recent advances in development of Wireless Communication in Vehicular Adhoc Network (VANET) has provided emerging platform for industrialists and researchers. Vehicular adhoc networks are multihop networks with no fixed infrastructure. It comprises of moving vehicles communicating with each other. One of the main challenge in VANET is to route the data efficiently from source to destination. Designing an efficient routing protocol for VANET is tedious task. Also because of wireless medium it is vulnerable to several attacks. Since attacks mislead the network operations, security is mandatory for successful deployment of such technology. This survey paper gives brief overview of different routing protocols. Also attempt has been made to identify major security issues and challenges associated with different routing protocols. .",
"title": ""
},
{
"docid": "49d533bf41f18bc96c404bb9a8bd12ae",
"text": "A back-cavity shielded bow-tie antenna system working at 900MHz center frequency for ground-coupled GPR application is investigated numerically and experimentally in this paper. Bow-tie geometrical structure is modified for a compact design and back-cavity assembly. A layer of absorber is employed to overcome the back reflection by omni-directional radiation pattern of a bow-tie antenna in H-plane, thus increasing the SNR and improve the isolation between T and R antennas as well. The designed antenna system is applied to a prototype GPR system. Tested data shows that the back-cavity shielded antenna works satisfactorily in the 900MHz GPR system.",
"title": ""
},
{
"docid": "9ca71bbeb4643a6a347050002f1317f5",
"text": "In modern society, we are increasingly disconnected from natural light/dark cycles and beset by round-the-clock exposure to artificial light. Light has powerful effects on physical and mental health, in part via the circadian system, and thus the timing of light exposure dictates whether it is helpful or harmful. In their compelling paper, Obayashi et al. (Am J Epidemiol. 2018;187(3):427-434.) offer evidence that light at night can prospectively predict an elevated incidence of depressive symptoms in older adults. Strengths of the study include the longitudinal design and direct, objective assessment of light levels, as well as accounting for multiple plausible confounders during analyses. Follow-up studies should address the study's limitations, including reliance on a global self-report of sleep quality and a 2-night assessment of light exposure that may not reliably represent typical light exposure. In addition, experimental studies including physiological circadian measures will be necessary to determine whether the light effects on depression are mediated through the circadian system or are so-called \"direct\" effects of light. In any case, these exciting findings could inform novel approaches to preventing depressive disorders in older adults.",
"title": ""
},
{
"docid": "ea55fffd5ed53588ba874780d9c5083a",
"text": "Representation learning is a central challenge across a range of machine learning areas. In reinforcement learning, effective and functional representations have the potential to tremendously accelerate learning progress and solve more challenging problems. Most prior work on representation learning has focused on generative approaches, learning representations that capture all underlying factors of variation in the observation space in a more disentangled or well-ordered manner. In this paper, we instead aim to learn functionally salient representations: representations that are not necessarily complete in terms of capturing all factors of variation in the observation space, but rather aim to capture those factors of variation that are important for decision making – that are “actionable.” These representations are aware of the dynamics of the environment, and capture only the elements of the observation that are necessary for decision making rather than all factors of variation, without explicit reconstruction of the observation. We show how these representations can be useful to improve exploration for sparse reward problems, to enable long horizon hierarchical reinforcement learning, and as a state representation for learning policies for downstream tasks. We evaluate our method on a number of simulated environments, and compare it to prior methods for representation learning, exploration, and hierarchical reinforcement learning.",
"title": ""
},
{
"docid": "794ad922f93b85e2195b3c85665a8202",
"text": "The paper shows how to create a probabilistic graph for WordNet. A node is created for every word and phrase in WordNet. An edge between two nodes is labeled with the probability that a user that is interested in the source concept will also be interested in the destination concept. For example, an edge with weight 0.3 between \"canine\" and \"dog\" indicates that there is a 30% probability that a user who searches for \"canine\" will be interested in results that contain the word \"dog\". We refer to the graph as probabilistic because we enforce the constraint that the sum of the weights of all the edges that go out of a node add up to one. Structural (e.g., the word \"canine\" is a hypernym (i.e., kind of) of the word \"dog\") and textual (e.g., the word \"canine\" appears in the textual definition of the word \"dog\") data from WordNet is used to create a Markov logic network, that is, a set of first order formulas with probabilities. The Markov logic network is then used to compute the weights of the edges in the probabilistic graph. We experimentally validate the quality of the data in the probabilistic graph on two independent benchmarks: Miller and Charles and WordSimilarity-353.",
"title": ""
},
{
"docid": "976ae17105f83e45a177c81441da3afa",
"text": "In the Google Play store, an introduction page is associated with every mobile application (app) for users to acquire its details, including screenshots, description, reviews, etc. However, it remains a challenge to identify what items influence users most when downloading an app. To explore users’ perspective, we conduct a survey to inquire about this question. The results of survey suggest that the participants pay most attention to the app description which gives users a quick overview of the app. Although there exist some guidelines about how to write a good app description to attract more downloads, it is hard to define a high quality app description. Meanwhile, there is no tool to evaluate the quality of app description. In this paper, we employ the method of crowdsourcing to extract the attributes that affect the app descriptions’ quality. First, we download some app descriptions from Google Play, then invite some participants to rate their quality with the score from one (very poor) to five (very good). The participants are also requested to explain every score’s reasons. By analyzing the reasons, we extract the attributes that the participants consider important during evaluating the quality of app descriptions. Finally, we train the supervised learning models on a sample of 100 app descriptions. In our experiments, the support vector machine model obtains up to 62% accuracy. In addition, we find that the permission, the number of paragraphs and the average number of words in one feature play key roles in defining a good app description.",
"title": ""
},
{
"docid": "026c1338e3c487d69523d0f0990451a4",
"text": "This article reports the psychometric evaluation of the Pornography Consumption Inventory (PCI), which was developed to assess motivations for pornography use among hypersexual men. Initial factor structure and item analysis were conducted in a sample of men (N = 105) seeking to reduce their pornography consumption (Study 1), yielding a 4-factor solution. In a second sample of treatment-seeking hypersexual men (N = 107), the authors further investigated the properties of the PCI using confirmatory factor analytic procedures, reliability indices, and explored PCI associations with several other constructs to establish convergent and discriminant validity. These studies demonstrate psychometric evidence for the PCI items that measure tendencies of hypersexual men to use pornography (a) for sexual pleasure; (b) to escape, cope, or avoid uncomfortable emotional experiences or stress; (c) to satisfy sexual curiosity; and (d) to satisfy desires for excitement, novelty, and variety.",
"title": ""
},
{
"docid": "55ec472aaff49b328d2aaf0a001fd1f6",
"text": "The threat of hardware reverse engineering is a growing concern for a large number of applications. A main defense strategy against reverse engineering is hardware obfuscation. In this paper, we investigate physical obfuscation techniques, which perform alterations of circuit elements that are difficult or impossible for an adversary to observe. The examples of such stealthy manipulations are changes in the doping concentrations or dielectric manipulations. An attacker will, thus, extract a netlist, which does not correspond to the logic function of the device-under-attack. This approach of camouflaging has garnered recent attention in the literature. In this paper, we expound on this promising direction to conduct a systematic end-to-end study of the VLSI design process to find multiple ways to obfuscate a circuit for hardware security. This paper makes three major contributions. First, we provide a categorization of the available physical obfuscation techniques as it pertains to various design stages. There is a large and multidimensional design space for introducing obfuscated elements and mechanisms, and the proposed taxonomy is helpful for a systematic treatment. Second, we provide a review of the methods that have been proposed or in use. Third, we present recent and new device and logic-level techniques for design obfuscation. For each technique considered, we discuss feasibility of the approach and assess likelihood of its detection. Then we turn our focus to open research questions, and conclude with suggestions for future research directions.",
"title": ""
},
{
"docid": "b11c59f3b49c064b9e866fddd328d9e6",
"text": "A new class of compact in-line filters with pseudoelliptic responses is presented in this paper. The proposed filters employ a new type of mixed-mode resonator. Such a resonator consists of a cavity loaded with a suspended high permittivity dielectric puck, so that both cavity TE101 mode and dielectric TE01δ mode are exploited within the same volume. This structure realizes the transverse doublet topology and it is therefore capable of generating a transmission zero (TZ) that can be either located above or below the passband. Multiple mixedmode resonators can be used as basic building blocks to obtain higher order filters by cascading them through nonresonating nodes. These filters are capable of implementing TZs that are very close to the passband edges, thus realizing an extreme closein rejection. As a result of the dielectric loading, the proposed solution leads to a very compact structure with improved temperature stability. To validate the proposed class of filters, a second-order filter with 2.0% fractional bandwidth (FBW) and a fourth-order filter with 2.5% FBW have been designed and manufactured.",
"title": ""
},
{
"docid": "6dce88afec3456be343c6a477350aa49",
"text": "In order to capture rich language phenomena, neural machine translation models have to use a large vocabulary size, which requires high computing time and large memory usage. In this paper, we alleviate this issue by introducing a sentence-level or batch-level vocabulary, which is only a very small sub-set of the full output vocabulary. For each sentence or batch, we only predict the target words in its sentencelevel or batch-level vocabulary. Thus, we reduce both the computing time and the memory usage. Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a wordto-word translation model or a bilingual phrase library learned from a traditional machine translation model. Experimental results on the large-scale English-toFrench task show that our method achieves better translation performance by 1 BLEU point over the large vocabulary neural machine translation system of Jean et al. (2015).",
"title": ""
},
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "36721d43d9aa484803af28a4a720ae21",
"text": "The recognition that nutrients have the ability to interact and modulate molecular mechanisms underlying an organism's physiological functions has prompted a revolution in the field of nutrition. Performing population-scaled epidemiological studies in the absence of genetic knowledge may result in erroneous scientific conclusions and misinformed nutritional recommendations. To circumvent such issues and more comprehensively probe the relationship between genes and diet, the field of nutrition has begun to capitalize on both the technologies and supporting analytical software brought forth in the post-genomic era. The creation of nutrigenomics and nutrigenetics, two fields with distinct approaches to elucidate the interaction between diet and genes but with a common ultimate goal to optimize health through the personalization of diet, provide powerful approaches to unravel the complex relationship between nutritional molecules, genetic polymorphisms, and the biological system as a whole. Reluctance to embrace these new fields exists primarily due to the fear that producing overwhelming quantities of biological data within the confines of a single study will submerge the original query; however, the current review aims to position nutrigenomics and nutrigenetics as the emerging faces of nutrition that, when considered with more classical approaches, will provide the necessary stepping stones to achieve the ambitious goal of optimizing an individual's health via nutritional intervention.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "f5a6dcd51ecf0dfbd1719e1eae8cbf71",
"text": "In this letter, the design of a compact high-power waveguide low-pass filter with low insertion loss, all-higher order mode suppression, and stopband rejection up to the third harmonic, intended for Ka-band satellite applications, is presented. The method is based on step-shaped bandstop elements separated by very short (ideally of zero length) waveguide sections easily manufactured by low-cost computer-controlled milling. Matching is achieved by short input/output networks based on stubs whose heights are optimized following classical approaches. The novel filter presents a reduction in insertion loss and a remarkable increase in the high-power handling capability when compared to the classical waffle-iron filter and alternative solutions previously proposed, while the out-of-band frequency behavior remains unaltered.",
"title": ""
},
{
"docid": "2b595cab271cac15ea165e46459d6923",
"text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.",
"title": ""
},
{
"docid": "cdb7380ca1a4b5a8059e3e4adc6b7ea2",
"text": "In this paper, tunable microstrip bandpass filters with two adjustable transmission poles and compensable coupling are proposed. The fundamental structure is based on a half-wavelength (λ/2) resonator with a center-tapped open-stub. Microwave varactors placed at various internal nodes separately adjust the filter's center frequency and bandwidth over a wide tuning range. The constant absolute bandwidth is achieved at different center frequencies by maintaining the distance between the in-band transmission poles. Meanwhile, the coupling strength could be compensable by tuning varactors that are side and embedding loaded in the parallel coupled microstrip lines (PCMLs). As a demonstrator, a second-order filter with seven tuning varactors is implemented and verified. A frequency range of 0.58-0.91 GHz with a 1-dB bandwidth tuning from 115 to 315 MHz (i.e., 12.6%-54.3% fractional bandwidth) is demonstrated. Specifically, the return loss of passbands with different operating center frequencies can be achieved with same level, i.e., about 13.1 and 11.6 dB for narrow and wide passband responses, respectively. To further verify the etch-tolerance characteristics of the proposed prototype filter, another second-order filter with nine tuning varactors is proposed and fabricated. The measured results exhibit that the tunable fitler with the embedded varactor-loaded PCML has less sensitivity to fabrication tolerances. Meanwhile, the passband return loss can be achieved with same level of 20 dB for narrow and wide passband responses, respectively.",
"title": ""
},
{
"docid": "249367e508f61804642ae37e27d70901",
"text": "For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.",
"title": ""
},
{
"docid": "80383246c35226231b4f136c6cc0019b",
"text": "How to automatically monitor wide critical open areas is a challenge to be addressed. Recent computer vision algorithms can be exploited to avoid the deployment of a large amount of expensive sensors. In this work, we propose our object tracking system which, combined with our recently developed anomaly detection system. can provide intelligence and protection for critical areas. In this work. we report two case studies: an international pier and a city parking lot. We acquire sequences to evaluate the effectiveness of the approach in challenging conditions. We report quantitative results for object counting, detection, parking analysis, and anomaly detection. Moreover, we report state-of-the-art results for statistical anomaly detection on a public dataset.",
"title": ""
},
{
"docid": "5031c9b3dfbe2bf2a07a4f1414f594e0",
"text": "BACKGROUND\nWe assessed the effects of a three-year national-level, ministry-led health information system (HIS) data quality intervention and identified associated health facility factors.\n\n\nMETHODS\nMonthly summary HIS data concordance between a gold standard data quality audit and routine HIS data was assessed in 26 health facilities in Sofala Province, Mozambique across four indicators (outpatient consults, institutional births, first antenatal care visits, and third dose of diphtheria, pertussis, and tetanus vaccination) and five levels of health system data aggregation (daily facility paper registers, monthly paper facility reports, monthly paper district reports, monthly electronic district reports, and monthly electronic provincial reports) through retrospective yearly audits conducted July-August 2010-2013. We used mixed-effects linear models to quantify changes in data quality over time and associated health system determinants.\n\n\nRESULTS\nMedian concordance increased from 56.3% during the baseline period (2009-2010) to 87.5% during 2012-2013. Concordance improved by 1.0% (confidence interval [CI]: 0.60, 1.5) per month during the intervention period of 2010-2011 and 1.6% (CI: 0.89, 2.2) per month from 2011-2012. No significant improvements were observed from 2009-2010 (during baseline period) or 2012-2013. Facilities with more technical staff (aβ: 0.71; CI: 0.14, 1.3), more first antenatal care visits (aβ: 3.3; CI: 0.43, 6.2), and fewer clinic beds (aβ: -0.94; CI: -1.7, -0.20) showed more improvements. Compared to facilities with no stock-outs, facilities with five essential drugs stocked out had 51.7% (CI: -64.8 -38.6) lower data concordance.\n\n\nCONCLUSIONS\nA data quality intervention was associated with significant improvements in health information system data concordance across public-sector health facilities in rural and urban Mozambique. Concordance was higher at those facilities with more human resources for health and was associated with fewer clinic-level stock-outs of essential medicines. Increased investments should be made in data audit and feedback activities alongside targeted efforts to improve HIS data in low- and middle-income countries.",
"title": ""
}
] | scidocsrr |
fec5a9f5e8e9adf4083b558236256656 | Green-lighting Movie Scripts : Revenue Forecasting and Risk Management | [
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "0de0093ab3720901d4704bfeb7be4093",
"text": "Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels from patients to hospital systems to governments. This paper provides an overview of Big Data, applicability of it in healthcare, some of the work in progress and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.",
"title": ""
},
{
"docid": "c68ec0f721c8d8bfa27a415ba10708cf",
"text": "Textures are widely used in modern computer graphics. Their size, however, is often a limiting factor. Considering the widespread adaptation of mobile virtual and augmented reality applications, efficient storage of textures has become an important factor.\n We present an approach to analyse textures of a given mesh and compute a new set of textures with the goal of improving storage efficiency and reducing memory requirements. During this process the texture coordinates of the mesh are updated as required. Textures are analysed based on the UV-coordinates of one or more meshes and deconstructed into per-triangle textures. These are further analysed to detect single coloured as well as identical per-triangle textures. Our approach aims to remove these redundancies in order to reduce the amount of memory required to store the texture data. After this analysis, the per-triangle textures are compiled into a new set of texture images of user defined size. Our algorithm aims to pack texture data as tightly as possible in order to reduce the memory requirements.",
"title": ""
},
{
"docid": "7874a6681c45d87345197245e1e054fe",
"text": "The continuous processing of streaming data has become an important aspect in many applications. Over the last years a variety of different streaming platforms has been developed and a number of open source frameworks is available for the implementation of streaming applications. In this report, we will survey the landscape of existing streaming platforms. Starting with an overview of the evolving developments in the recent past, we will discuss the requirements of modern streaming architectures and present the ways these are approached by the different frameworks.",
"title": ""
},
{
"docid": "8decac4ff789460595664a38e7527ed6",
"text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.",
"title": ""
},
{
"docid": "c99e4708a72c08569c25423efbe67775",
"text": "Predicting the next activity of a running process is an important aspect of process management. Recently, artificial neural networks, so called deep-learning approaches, have been proposed to address this challenge. This demo paper describes a software application that applies the Tensorflow deep-learning framework to process prediction. The software application reads industry-standard XES files for training and presents the user with an easy-to-use graphical user interface for both training and prediction. The system provides several improvements over earlier work. This demo paper focuses on the software implementation and describes the architecture and user interface.",
"title": ""
},
{
"docid": "08ca7be2334de477905e8766c8612c8f",
"text": "a r t i c l e i n f o a b s t r a c t A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.",
"title": ""
},
{
"docid": "fb8e6eac761229fc8c12339fb68002ed",
"text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.",
"title": ""
},
{
"docid": "66782c46d59dd9ef225e9f3ea0b47cfe",
"text": "Intraoperative vital signals convey a wealth of complex temporal information that can provide significant insights into a patient's physiological status during the surgery, as well as outcomes after the surgery. Our study involves the use of a deep recurrent neural network architecture to predict patient's outcomes after the surgery, as well as to predict the immediate changes in the intraoperative signals during the surgery. More specifically, we will use a Long Short-Term Memory (LSTM) model which is a gated deep recurrent neural network architecture. We have performed two experiments on a large intraoperative dataset of 12,036 surgeries containing information on 7 intraoperative signals including body temperature, respiratory rate, heart rate, diastolic blood pressure, systolic blood pressure, fraction of inspired O2 and end-tidal CO2. We first evaluated the capability of LSTM in predicting the immediate changes in intraoperative signals, and then we evaluated its performance on predicting each patient's length of stay outcome. Our experiments show the effectiveness of LSTM with promising results on both tasks compared to the traditional models.",
"title": ""
},
{
"docid": "4799b4aa7e936d88fef0bb1e1f95f401",
"text": "This article summarizes and reviews the literature on neonaticide, infanticide, and filicide. A literature review was conducted using the Medline database: the cue terms neonaticide, infanticide, and filicide were searched. One hundred-fifteen articles were reviewed; of these, 51 are cited in our article. We conclude that while infanticide dates back to the beginning of recorded history, little is known about what causes parents to murder their children. To this end, further research is needed to identify potential perpetrators and to prevent subsequent acts of child murder by a parent.",
"title": ""
},
{
"docid": "852c85ecbed639ea0bfe439f69fff337",
"text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.",
"title": ""
},
{
"docid": "b752f0f474b8f275f09d446818647564",
"text": "n engl j med 377;15 nejm.org October 12, 2017 4. Aysola J, Tahirovic E, Troxel AB, et al. A randomized controlled trial of opt-in versus opt-out enrollment into a diabetes behavioral intervention. Am J Health Promot 2016 October 21 (Epub ahead of print). 5. Mehta SJ, Troxel AB, Marcus N, et al. Participation rates with opt-out enrollment in a remote monitoring intervention for patients with myocardial infarction. JAMA Cardiol 2016; 1: 847-8. DOI: 10.1056/NEJMp1707991",
"title": ""
},
{
"docid": "ec6c62f25c987446522b49840c4242d7",
"text": "Have you ever been in a sauna? If yes, according to our recent survey conducted on Amazon Mechanical Turk, people who go to saunas are more likely to know that Mike Stonebraker is not a character in “The Simpsons”. While this result clearly makes no sense, recently proposed tools to automatically suggest visualizations, correlations, or perform visual data exploration, significantly increase the chance that a user makes a false discovery like this one. In this paper, we first show how current tools mislead users to consider random fluctuations as significant discoveries. We then describe our vision and early results for QUDE, a new system for automatically controlling the various risk factors during the data exploration process.",
"title": ""
},
{
"docid": "c9582409212e6f9b194175845216b2b6",
"text": "Although the amygdala complex is a brain area critical for human behavior, knowledge of its subspecialization is primarily derived from experiments in animals. We here employed methods for large-scale data mining to perform a connectivity-derived parcellation of the human amygdala based on whole-brain coactivation patterns computed for each seed voxel. Voxels within the histologically defined human amygdala were clustered into distinct groups based on their brain-wide coactivation maps. Using this approach, connectivity-based parcellation divided the amygdala into three distinct clusters that are highly consistent with earlier microstructural distinctions. Meta-analytic connectivity modelling then revealed the derived clusters' brain-wide connectivity patterns, while meta-data profiling allowed their functional characterization. These analyses revealed that the amygdala's laterobasal nuclei group was associated with coordinating high-level sensory input, whereas its centromedial nuclei group was linked to mediating attentional, vegetative, and motor responses. The often-neglected superficial nuclei group emerged as particularly sensitive to olfactory and probably social information processing. The results of this model-free approach support the concordance of structural, connectional, and functional organization in the human amygdala and point to the importance of acknowledging the heterogeneity of this region in neuroimaging research.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "f65c3e60dbf409fa2c6e58046aad1e1c",
"text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.",
"title": ""
},
{
"docid": "a8e3fd9ddfdb1eaea980246489579812",
"text": "With modern computer graphics, we can generate enormous amounts of 3D scene data. It is now possible to capture high-quality 3D representations of large real-world environments. Large shape and scene databases, such as the Trimble 3D Warehouse, are publicly accessible and constantly growing. Unfortunately, while a great amount of 3D content exists, most of it is detached from the semantics and functionality of the objects it represents. In this paper, we present a method to establish a correlation between the geometry and the functionality of 3D environments. Using RGB-D sensors, we capture dense 3D reconstructions of real-world scenes, and observe and track people as they interact with the environment. With these observations, we train a classifier which can transfer interaction knowledge to unobserved 3D scenes. We predict a likelihood of a given action taking place over all locations in a 3D environment and refer to this representation as an action map over the scene. We demonstrate prediction of action maps in both 3D scans and virtual scenes. We evaluate our predictions against ground truth annotations by people, and present an approach for characterizing 3D scenes by functional similarity using action maps.",
"title": ""
},
{
"docid": "87e732240f00b112bf2bb44af0ff8ca1",
"text": "Spoken Dialogue Systems (SDS) are man-machine interfaces which use natural language as the medium of interaction. Dialogue corpora collection for the purpose of training and evaluating dialogue systems is an expensive process. User simulators aim at simulating human users in order to generate synthetic data. Existing methods for user simulation mainly focus on generating data with the same statistical consistency as in some reference dialogue corpus. This paper outlines a novel approach for user simulation based on Inverse Reinforcement Learning (IRL). The task of building the user simulator is perceived as a task of imitation learning.",
"title": ""
},
{
"docid": "32f6db1bf35da397cd61d744a789d49c",
"text": "Mushroom poisoning is the main cause of mortality in food poisoning incidents in China. Although some responsible mushroom species have been identified, some were identified inaccuratly. This study investigated and analyzed 102 mushroom poisoning cases in southern China from 1994 to 2012, which involved 852 patients and 183 deaths, with an overall mortality of 21.48 %. The results showed that 85.3 % of poisoning cases occurred from June to September, and involved 16 species of poisonous mushroom: Amanita species (A. fuliginea, A. exitialis, A. subjunquillea var. alba, A. cf. pseudoporphyria, A. kotohiraensis, A. neoovoidea, A. gymnopus), Galerina sulciceps, Psilocybe samuiensis, Russula subnigricans, R. senecis, R. japonica, Chlorophyllum molybdites, Paxillus involutus, Leucocoprinus cepaestipes and Pulveroboletus ravenelii. Six species (A. subjunquillea var. alba, A. cf. pseudoporphyria, A. gymnopus, R. japonica, Psilocybe samuiensis and Paxillus involutus) are reported for the first time in poisoning reports from China. Psilocybe samuiensis is a newly recorded species in China. The genus Amanita was responsible for 70.49 % of fatalities; the main lethal species were A. fuliginea and A. exitialis. Russula subnigricans caused 24.59 % of fatalities, and five species showed mortality >20 % (A. fuliginea, A. exitialis, A. subjunquillea var. alba, R. subnigricans and Paxillus involutus). Mushroom poisoning symptoms were classified from among the reported clinical symptoms. Seven types of mushroom poisoning symptoms were identified for clinical diagnosis and treatment in China, including gastroenteritis, acute liver failure, acute renal failure, psychoneurological disorder, hemolysis, rhabdomyolysis and photosensitive dermatitis.",
"title": ""
},
{
"docid": "a6fc1c70b4bab666d5d580214fa3e09f",
"text": "Software designs decay as systems, uses, and operational environments evolve. Decay can involve the design patterns used to structure a system. Classes that participate in design pattern realizations accumulate grime—non-pattern-related code. Design pattern realizations can also rot, when changes break the structural or functional integrity of a design pattern. Design pattern rot can prevent a pattern realization from fulfilling its responsibilities, and thus represents a fault. Grime buildup does not break the structural integrity of a pattern but can reduce system testability and adaptability. This research examined the extent to which software designs actually decay, rot, and accumulate grime by studying the aging of design patterns in three successful object-oriented systems. We generated UML models from the three implementations and employed a multiple case study methodology to analyze the evolution of the designs. We found no evidence of design pattern rot in these systems. However, we found considerable evidence of pattern decay due to grime. Dependencies between design pattern components increased without regard for pattern intent, reducing pattern modularity, and decreasing testability and adaptability. The study of decay and grime showed that the grime that builds up around design patterns is mostly due to increases in coupling.",
"title": ""
},
{
"docid": "998bf65b2e95db90eb9fab8e13b47ff6",
"text": "Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the Block Term networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.",
"title": ""
}
] | scidocsrr |
f92af42a16e4b181d528f7067b0752f2 | PCA vs. ICA: A Comparison on the FERET Data Set | [
{
"docid": "8b948819efed14853dcfeeabdb28c1be",
"text": "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing.",
"title": ""
},
{
"docid": "ffc36fa0dcc81a7f5ba9751eee9094d7",
"text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.",
"title": ""
}
] | [
{
"docid": "c252cca4122984aac411a01ce28777f7",
"text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.",
"title": ""
},
{
"docid": "6d570aabfbf4f692fc36a0ef5151a469",
"text": "Background: Balance is a component of basic needs for daily activities and it plays an important role in static and dynamic activities. Core stabilization training is thought to improve balance, postural control, and reduce the risk of lower extremity injuries. The purpose of this study was to study the effect of core stabilizing program on balance in spastic diplegic cerebral palsy children. Subjects and Methods: Thirty diplegic cerebral palsy children from both sexes ranged in age from six to eight years participated in this study. They were assigned randomly into two groups of equal numbers, control group (A) children were received selective therapeutic exercises and study group (B) children were received selective therapeutic exercises plus core stabilizing program for eight weeks. Each patient of the two groups was evaluated before and after treatment by Biodex Balance System in laboratory of balance in faculty of physical therapy (antero posterior, medio lateral and overall stability). Patients in both groups received traditional physical therapy program for one hour per day and three sessions per week and group (B) were received core stabilizing program for eight weeks three times per week. Results: There was no significant difference between the two groups in all measured variables before wearing the orthosis (p>0.05), while there was significant difference when comparing pre and post mean values of all measured variables in each group (p<0.01). When comparing post mean values between both groups, the results revealed significant improvement in favor of group (B) (p<0.01). Conclusion: core stabilizing program is an effective therapeutic exercise to improve balance in diplegic cerebral palsy children.",
"title": ""
},
{
"docid": "ffdd14d8d74a996971284a8e5e950996",
"text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.",
"title": ""
},
{
"docid": "51e0caf419babd61615e1537545e40e8",
"text": "Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expressions for interpretation of the observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discrimination between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters including the maximal intensity, duration, and order of occurrence. We use Gentle Boost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine per temporal segment of an activated AU whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.",
"title": ""
},
{
"docid": "a48193a735485fa2bca35897bae54208",
"text": "Interest in and research on disgust has surged over the past few decades. The field, however, still lacks a coherent theoretical framework for understanding the evolved function or functions of disgust. Here we present such a framework, emphasizing 2 levels of analysis: that of evolved function and that of information processing. Although there is widespread agreement that disgust evolved to motivate the avoidance of contact with disease-causing organisms, there is no consensus about the functions disgust serves when evoked by acts unrelated to pathogen avoidance. Here we suggest that in addition to motivating pathogen avoidance, disgust evolved to regulate decisions in the domains of mate choice and morality. For each proposed evolved function, we posit distinct information processing systems that integrate function-relevant information and account for the trade-offs required of each disgust system. By refocusing the discussion of disgust on computational mechanisms, we recast prior theorizing on disgust into a framework that can generate new lines of empirical and theoretical inquiry.",
"title": ""
},
{
"docid": "050ca96de473a83108b5ac26f4ac4349",
"text": "The concept of graphene-based two-dimensional leaky-wave antenna (LWA), allowing both frequency tuning and beam steering in the terahertz band, is proposed in this paper. In its design, a graphene sheet is used as a tuning part of the high-impedance surface (HIS) that acts as the ground plane of such 2-D LWA. It is shown that, by adjusting the graphene conductivity, the reflection phase of the HIS can be altered effectively, thus controlling the resonant frequency of the 2-D LWA over a broad band. In addition, a flexible adjustment of its pointing direction can be achieved over a wide range, while keeping the operating frequency fixed. Transmission-line methods are used to accurately predict the antenna reconfigurable characteristics, which are further verified by means of commercial full-wave analysis tools.",
"title": ""
},
{
"docid": "909405e3c06f22273107cb70a40d88c6",
"text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.",
"title": ""
},
{
"docid": "cb88333d7c90df778361318dd362e9cb",
"text": "1. All other texts on the mathematics of language are now obsolete. Therefore, instead of going on about what a wonderful job Partee, ter Meulen, and Wall (henceforth, PMW) have done in some ways (breadth of coverage, much better presentation of formal semantics than is usual in books on mathematics of language, etc.), I will leave the lily ungilded, and focus on some points where the book under review could be made far better than it actually is. 2. Perhaps my main complaint concerns the treatment of the connections between the mathematical methods and the linguistics. This whole question is dealt with rather unevenly, and this is reflected in the very structure of the book. The major topics covered, corresponding to the book's division into parts (which are then subdivided into chapters) are set theory, logic and formal systems, algebra, \"English as a formal language\" (this is the heading under which compositionality, lambda-abstraction, generalized quantifiers, and intensionality are discussed), and finally formal language and automata theory. Now, the \"English as a formal language\" part deals with a Montague-style treatment of this language, but it does not go into contemporary syntactic analyses of English, not even ones that are mathematically precise and firmly grounded in formal language theory. Having praised the book for its detailed discussion of the uses of formal semantics in linguistics, I must damn its cavalier treatment of the uses of formal syntax. Thus, there is no mention anywhere in it of generalized phrase structure grammar or X-bar syntax or almost anything else of relevance to modern syntactic theory. Likewise, although the section on set theory deals at some length with nondenumerable sets, there is no mention of the argument of Langendoen and Postal (1984) that NLs are not denumerable. Since this is perhaps the one place in the literature where set theory and linguistics meet, one does not have to be a fan of Langendoen and Postal to see that this topic should be broached. 3. Certain important theoretical topics, usually ones at the interface of mathematics and linguistics, are presented sketchily and even misleadingly; for example, the compositionality of formal semantics, the generative power of transformational grammar, the nonregularity and noncontext freeness of NLs, and (more generally) the question of what kinds of objects one can prove things about. Let us begin with the principle of compositionality (i.e., that \"the meaning of a complex expression is a function of the meanings of its parts and of the syntactic rules by which they are combined\"). PMW claim that \"construed broadly and vaguely",
"title": ""
},
{
"docid": "2fa2ada108af6a24ae296723cec5ae14",
"text": "We sought to determine if antenatal corticosteroid treatment administered prior to 24 weeks' gestation influences neonatal morbidity and mortality in extremely low-birth-weight infants. A retrospective review was performed of all singleton pregnancies treated with one complete course of antenatal corticosteroids prior to 24 weeks' gestation and delivered between 23(0)/(7) and 25(6)/(7) weeks. These infants were compared with similar gestational-age controls. There were no differences in gender, race, birth weight, and gestational age between the groups. Infants exposed to antenatal corticosteroids had lower mortality (29.3% versus 62.9%, P = 0.001) and grade 3 or 4 intraventricular hemorrhage (IVH; 16.7% versus 36%, P < 0.05; relative risk [RR]: 2.16). Grade 3 and 4 IVH was associated with significantly lower survival probability as compared with no IVH or grade 1 and 2 IVH (P < 0.001, RR: 10.6, 95% confidence interval [CI]: 4.4 to 25.6). Antenatal steroid exposure was associated with a 62% decrease in the hazard rate compare with those who did not receive antenatal steroids after adjusting for IVH grade (Cox proportional hazard model, hazard ratio 0.38, 95% CI: 0.152 to 0.957, P = 0.04). The rates of premature rupture of membranes and chorioamnionitis were higher for infants exposed to antenatal corticosteroids. Exposure to a single course of antenatal corticosteroids prior to 24 weeks' gestation was associated with reduction of the risk of severe IVH and neonatal mortality for extremely low-birth-weight infants.",
"title": ""
},
{
"docid": "905ba98c5d0a3ec39e06e9a14caa9016",
"text": "Dialogue topic tracking is a sequential labelling problem of recognizing the topic state at each time step in given dialogue sequences. This paper presents various artificial neural network models for dialogue topic tracking, including convolutional neural networks to account for semantics at each individual utterance, and recurrent neural networks to account for conversational contexts along multiple turns in the dialogue history. The experimental results demonstrate that our proposed models can significantly improve the tracking performances in human-human conversations.",
"title": ""
},
{
"docid": "50dd728b4157aefb7df35366f5822d0d",
"text": "This paper describes iDriver, an iPhone software to remote control “Spirit of Berlin”. “Spirit of Berlin” is a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. iDriver is an iPhone application sending control packets to the car in order to remote control its steering wheel, gas and brake pedal, gear shift and turn signals. Additionally, a video stream from two top-mounted cameras is broadcasted back to the iPhone.",
"title": ""
},
{
"docid": "1453350c8134ecfe272255b71e7707ad",
"text": "Program slicing is a viable method to restrict the focus of a task to specific sub-components of a program. Examples of applications include debugging, testing, program comprehension, restructuring, downsizing, and parallelization. This paper discusses different statement deletion based slicing methods, together with algorithms and applications to software engineering.",
"title": ""
},
{
"docid": "9fb9664eea84d3bc0f59f7c4714debc1",
"text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.",
"title": ""
},
{
"docid": "a5255efa61de43a3341473facb4be170",
"text": "Differentiation of 3T3-L1 preadipocytes can be induced by a 2-d treatment with a factor \"cocktail\" (DIM) containing the synthetic glucocorticoid dexamethasone (dex), insulin, the phosphodiesterase inhibitor methylisobutylxanthine (IBMX) and fetal bovine serum (FBS). We temporally uncoupled the activities of the four DIM components and found that treatment with dex for 48 h followed by IBMX treatment for 48 h was sufficient for adipogenesis, whereas treatment with IBMX followed by dex failed to induce significant differentiation. Similar results were obtained with C3H10T1/2 and primary mesenchymal stem cells. The 3T3-L1 adipocytes differentiated by sequential treatment with dex and IBMX displayed insulin sensitivity equivalent to DIM adipocytes, but had lower sensitivity to ISO-stimulated lipolysis and reduced triglyceride content. The nondifferentiating IBMX-then-dex treatment produced transient expression of adipogenic transcriptional regulatory factors C/EBPbeta and C/EBPdelta, and little induction of terminal differentiation factors C/EBPalpha and PPARgamma. Moreover, the adipogenesis inhibitor preadipocyte factor-1 (Pref-1) was repressed by DIM or by dex-then-IBMX, but not by IBMX-then-dex treatment. We conclude that glucocorticoids drive preadipocytes to a novel intermediate cellular state, the dex-primed preadipocyte, during adipogenesis in cell culture, and that Pref-1 repression may be a cell fate determinant in preadipocytes.",
"title": ""
},
{
"docid": "67925645b590cba622dd101ed52cf9e2",
"text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "39bc8559589f388bb6eca16a1b3b2e87",
"text": "This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pretrained convolutional neural networks (CNNs). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e. which filters (or object parts) are used for prediction and how much they contribute in the prediction. To conduct such a quantitative explanation of a CNN, our method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. Experiments have demonstrated the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "00ac09dab67200f6b9df78a480d6dbd8",
"text": "In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.",
"title": ""
},
{
"docid": "52e75a2e3d34c1cef5e61c69e074caf2",
"text": "In this paper, we propose an efficient method for license plate localization in the images with various situations and complex background. At the first, in order to reduce problems such as low quality and low contrast in the vehicle images, image contrast is enhanced by the two different methods and the best for following is selected. At the second part, vertical edges of the enhanced image are extracted by sobel mask. Then the most of the noise and background edges are removed by an effective algorithm. The output of this stage is given to a morphological filtering to extract the candidate regions and finally we use several geometrical features such as area of the regions, aspect ratio and edge density to eliminate the non-plate regions and segment the plate from the input car image. This method is performed on some real images that have been captured at the different imaging conditions. The appropriate experimental results show that our proposed method is nearly independent to environmental conditions such as lightening, camera angles and camera distance from the automobile, and license plate rotation.",
"title": ""
},
{
"docid": "7bf3adb52e9f2c40d419872f82429a06",
"text": "OBJECTIVES\nWe examine recent published research on the extraction of information from textual documents in the Electronic Health Record (EHR).\n\n\nMETHODS\nLiterature review of the research published after 1995, based on PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers already included.\n\n\nRESULTS\n174 publications were selected and are discussed in this review in terms of methods used, pre-processing of textual documents, contextual features detection and analysis, extraction of information in general, extraction of codes and of information for decision-support and enrichment of the EHR, information extraction for surveillance, research, automated terminology management, and data mining, and de-identification of clinical text.\n\n\nCONCLUSIONS\nPerformance of information extraction systems with clinical text has improved since the last systematic review in 1995, but they are still rarely applied outside of the laboratory they have been developed in. Competitive challenges for information extraction from clinical text, along with the availability of annotated clinical text corpora, and further improvements in system performance are important factors to stimulate advances in this field and to increase the acceptance and usage of these systems in concrete clinical and biomedical research contexts.",
"title": ""
}
] | scidocsrr |
76ed18681f0b79466975597be0c2545e | Cannabinoid signaling and liver therapeutics. | [
{
"docid": "7e2bbd260e58d84a4be8b721cdf51244",
"text": "Obesity is characterised by altered gut microbiota, low-grade inflammation and increased endocannabinoid (eCB) system tone; however, a clear connection between gut microbiota and eCB signalling has yet to be confirmed. Here, we report that gut microbiota modulate the intestinal eCB system tone, which in turn regulates gut permeability and plasma lipopolysaccharide (LPS) levels. The impact of the increased plasma LPS levels and eCB system tone found in obesity on adipose tissue metabolism (e.g. differentiation and lipogenesis) remains unknown. By interfering with the eCB system using CB(1) agonist and antagonist in lean and obese mouse models, we found that the eCB system controls gut permeability and adipogenesis. We also show that LPS acts as a master switch to control adipose tissue metabolism both in vivo and ex vivo by blocking cannabinoid-driven adipogenesis. These data indicate that gut microbiota determine adipose tissue physiology through LPS-eCB system regulatory loops and may have critical functions in adipose tissue plasticity during obesity.",
"title": ""
}
] | [
{
"docid": "b357803105e6558f32061bdef0b0d6c3",
"text": "We present a modular controller for quadruped locomotion over unperceived rough terrain. Our approach is based on a computational Central Pattern Generator (CPG) model implemented as coupled nonlinear oscillators. Stumbling correction reflex is implemented as a sensory feedback mechanism affecting the CPG. We augment the outputs of the CPG with virtual model control torques responsible for posture control. The control strategy is validated on a 3D forward dynamics simulated quadruped robot platform of about the size and weight of a cat. To demonstrate the capabilities of the proposed approach, we perform locomotion over unperceived uneven terrain and slopes, as well as situations facing external pushes.",
"title": ""
},
{
"docid": "2bb39c3428116cef1f60cd1c5d36613e",
"text": "Digital video signal is widely used in modern society. There is increasing demand for it to be more secure and highly reliable. Focusing on this, we propose a method of detecting mosaic blocks. Our proposed method combines two algorithms: HOG with SVM classifier and template matching. We also consider characteristics of mosaic blocks other than shape. Experimental results show that our proposed method has high detection performance of mosaic blocks.",
"title": ""
},
{
"docid": "e498e5f0b1174e465dbef8747545f5a7",
"text": "We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our model, a part of the encoder output is interpreted as a confidence region estimate about the embedding point, and expressed as a Gaussian covariance matrix. Our network then constructs a direction and class dependent distance metric on the embedding space, using uncertainties of individual data points as weights. We show that Gaussian prototypical networks are a preferred architecture over vanilla prototypical networks with an equivalent number of parameters. We report state-ofthe-art performance in 1-shot and 5-shot classification both in 5-way and 20-way regime (for 5-shot 5-way, we are comparable to previous state-of-the-art) on the Omniglot dataset. We explore artificially down-sampling a fraction of images in the training set, which improves our performance even further. We therefore hypothesize that Gaussian prototypical networks might perform better in less homogeneous, noisier datasets, which are commonplace in real world applications.",
"title": ""
},
{
"docid": "96a79bc015e34db18e32a31bfaaace36",
"text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.",
"title": ""
},
{
"docid": "4163070f45dd4d252a21506b1abcfff4",
"text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.",
"title": ""
},
{
"docid": "676540e4b0ce65a71e86bf346f639f22",
"text": "Methylation is a prevalent posttranscriptional modification of RNAs. However, whether mammalian microRNAs are methylated is unknown. Here, we show that the tRNA methyltransferase NSun2 methylates primary (pri-miR-125b), precursor (pre-miR-125b), and mature microRNA 125b (miR-125b) in vitro and in vivo. Methylation by NSun2 inhibits the processing of pri-miR-125b2 into pre-miR-125b2, decreases the cleavage of pre-miR-125b2 into miR-125, and attenuates the recruitment of RISC by miR-125, thereby repressing the function of miR-125b in silencing gene expression. Our results highlight the impact of miR-125b function via methylation by NSun2.",
"title": ""
},
{
"docid": "d984489b4b71eabe39ed79fac9cf27a1",
"text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing",
"title": ""
},
{
"docid": "05d8383eb6b1c6434f75849859c35fd0",
"text": "This paper proposes a robust approach for image based floor detection and segmentation from sequence of images or video. In contrast to many previous approaches, which uses a priori knowledge of the surroundings, our method uses combination of modified sparse optical flow and planar homography for ground plane detection which is then combined with graph based segmentation for extraction of floor from images. We also propose a probabilistic framework which makes our method adaptive to the changes in the surroundings. We tested our algorithm on several common indoor environment scenarios and were able to extract floor even under challenging circumstances. We obtained extremely satisfactory results in various practical scenarios such as where the floor and non floor areas are of same color, in presence of textured flooring, and where illumination changes are steep.",
"title": ""
},
{
"docid": "679759d8f8e4c4ef5a2bb1356a61d7f5",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
{
"docid": "c741867c7d29026da910c52be073942d",
"text": "In this report we summarize the results of the SemEval 2016 Task 8: Meaning Representation Parsing. Participants were asked to generate Abstract Meaning Representation (AMR) (Banarescu et al., 2013) graphs for a set of English sentences in the news and discussion forum domains. Eleven sites submitted valid systems. The availability of state-of-the-art baseline systems was a key factor in lowering the bar to entry; many submissions relied on CAMR (Wang et al., 2015b; Wang et al., 2015a) as a baseline system and added extensions to it to improve scores. The evaluation set was quite difficult to parse, particularly due to creative approaches to word representation in the web forum portion. The top scoring systems scored 0.62 F1 according to the Smatch (Cai and Knight, 2013) evaluation heuristic. We show some sample sentences along with a comparison of system parses and perform quantitative ablative studies.",
"title": ""
},
{
"docid": "429c900f6ac66bcea5aa068d27f5b99f",
"text": "Recent researches shows that Brain Computer Interface (BCI) technology provides effective way of communication between human and physical device. In this work, an EEG based wireless mobile robot is implemented for people suffer from motor disabilities can interact with physical devices based on Brain Computer Interface (BCI). An experimental model of mobile robot is explored and it can be controlled by human eye blink strength. EEG signals are acquired from NeuroSky Mind wave Sensor (single channel prototype) in non-invasive manner and Signal features are extracted by adopting Discrete Wavelet Transform (DWT) to amend the signal resolution. We analyze and compare the db4 and db7 wavelets for accurate classification of blink signals. Different classes of movements are achieved based on different blink strength of user. The experimental setup of adaptive human machine interface system provides better accuracy and navigates the mobile robot based on user command, so it can be adaptable for disabled people.",
"title": ""
},
{
"docid": "4839938502248899c8adc9b6ef359c52",
"text": "This paper introduces an overview and positioning of the contemporary brand experience in the digital context. With technological advances in games, gamification and emerging technologies, such as Virtual Reality (VR) and Artificial Intelligence (AI), it is possible that brand experiences are getting more pervasive and seamless. In this paper, we review the current theories around multi-sensory brand experience and the role of new technologies in the whole consumer journey, including pre-purchase, purchase and post-purchase stages. After this analysis, we introduce a conceptual framework that promotes a continuous loop of consumer experience and engagement from different and new touch points, which could be augmented by games, gamification and emerging technologies. Based on the framework, we conclude this paper with propositions, examples and recommendations for future research in contemporary brand management, which could help brand managers and designers to deal with technological challenges posed by the contemporary society.",
"title": ""
},
{
"docid": "0cc25de8ea70fe1fd85824e8f3155bf7",
"text": "When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects’ shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to tailor mapping rules, through limited user input, to a specific application domain. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.",
"title": ""
},
{
"docid": "e9768df1b2a679e7d9e81588d4c2af02",
"text": "Over the last few decades, the electric utilities have seen a very significant increase in the application of metal oxide surge arresters on transmission lines in an effort to reduce lightning initiated flashovers, maintain high power quality and to avoid damages and disturbances especially in areas with high soil resistivity and lightning ground flash density. For economical insulation coordination in transmission and substation equipment, it is necessary to predict accurately the lightning surge overvoltages that occur on an electric power system.",
"title": ""
},
{
"docid": "238aac56366875b1714284d3d963fe9b",
"text": "We construct a general-purpose multi-input functional encryption scheme in the private-key setting. Namely, we construct a scheme where a functional key corresponding to a function f enables a user holding encryptions of $$x_1, \\ldots , x_t$$ x1,…,xt to compute $$f(x_1, \\ldots , x_t)$$ f(x1,…,xt) but nothing else. This is achieved starting from any general-purpose private-key single-input scheme (without any additional assumptions) and is proven to be adaptively secure for any constant number of inputs t. Moreover, it can be extended to a super-constant number of inputs assuming that the underlying single-input scheme is sub-exponentially secure. Instantiating our construction with existing single-input schemes, we obtain multi-input schemes that are based on a variety of assumptions (such as indistinguishability obfuscation, multilinear maps, learning with errors, and even one-way functions), offering various trade-offs between security assumptions and functionality. Previous and concurrent constructions of multi-input functional encryption schemes either rely on stronger assumptions and provided weaker security guarantees (Goldwasser et al. in Advances in cryptology—EUROCRYPT, 2014; Ananth and Jain in Advances in cryptology—CRYPTO, 2015), or relied on multilinear maps and could be proven secure only in an idealized generic model (Boneh et al. in Advances in cryptology—EUROCRYPT, 2015). In comparison, we present a general transformation that simultaneously relies on weaker assumptions and guarantees stronger security.",
"title": ""
},
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
},
{
"docid": "6e2d7dae0891a2f3a8f02fdb81af9dc6",
"text": "Wireless Sensor Networks (WSNs) are charac-terized by multi-hop wireless connectivity, frequently changing network topology and need for efficient routing protocols. The purpose of this paper is to evaluate performance of routing protocol DSDV in wireless sensor network (WSN) scales regarding the End-to-End delay and throughput PDR with mobility factor .Routing protocols are a critical aspect to performance in mobile wireless networks and play crucial role in determining network performance in terms of packet delivery fraction, end-to-end delay and packet loss. Destination-sequenced distance vector (DSDV) protocol is a proactive protocol depending on routing tables which are maintained at each node. The routing protocol should detect and maintain optimal route(s) between source and destination nodes. In this paper, we present application of DSDV in WSN as extend to our pervious study to the design and impleme-ntation the details of the DSDV routing protocol in MANET using the ns-2 network simulator.",
"title": ""
},
{
"docid": "b89259a915856b309a02e6e7aa6c957f",
"text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.",
"title": ""
},
{
"docid": "a5cd94446abfc46c6d5c4e4e376f1e0a",
"text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "fa51c71a66a8348dae241272a71b27e2",
"text": "Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimization. This paper suggests a unified paradigm, which combines dominance- and decomposition-based approaches, for many-objective optimization. Our major purpose is to exploit the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. The performance of our proposed method is validated and compared with four state-of-the-art algorithms on a number of unconstrained benchmark problems with up to 15 objectives. Empirical results fully demonstrate the superiority of our proposed method on all considered test instances. In addition, we extend this method to solve constrained problems having a large number of objectives. Compared to two other recently proposed constrained optimizers, our proposed method shows highly competitive performance on all the constrained optimization problems.",
"title": ""
}
] | scidocsrr |
49cfc1193997985c8b7c247f67287fc6 | Forecasting daily lake levels using artificial intelligence approaches | [
{
"docid": "00b8207e783aed442fc56f7b350307f6",
"text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.",
"title": ""
}
] | [
{
"docid": "a7b9505a029e58531f250c5728dbeef4",
"text": "This paper proposes an object recognition approach intended for extracting, analyzing and clustering of features from RGB image views from given objects. Extracted features are matched with features in learned object models and clustered in Hough-space to find a consistent object pose. Hypotheses for valid poses are verified by computing a homography from detected features. Using that homography features are back projected onto the input image and the resulting area is checked for possible presence of other objects. This approach is applied by our team homer[at]UniKoblenz in the RoboCup[at]Home league. Besides the proposed framework, this work offers the computer vision community with online programs available as open source software.",
"title": ""
},
{
"docid": "43044459a273dafa29dccdfc0cf90734",
"text": "The principles and practices that guide the design and development of test items are changing because our assessment practices are changing. Educational visionary Randy Bennett (2001) anticipated that computers and the Internet would become two of the most powerful forces of change in educational measurement. Bennett’s premonition was spot-on. Internet-based computerized testing has dramatically changed educational measurement because test administration procedures combined with the growing popularity of digital media and the explosion in Internet use have created the foundation for different types of tests and test items. As a result, many educational tests that were once given in a paper format are now administered by computer using the Internet. Many common and wellknown exams in the domain of certification and licensure testing can be cited as examples, including the Graduate Management Achievement Test (GMAT), the Graduate Record Exam (GRE), the Test of English as a Foreign Language (TOEFL iBT), the American Institute of Certified Public Accountants Uniform CPA examination (CBT-e), the Medical Council of Canada Qualifying Exam Part I (MCCQE I), the National Council Licensure Examination for Registered Nurses (NCLEX-RN) and the National Council Licensure Examination for Practical Nurses (NCLEX-PN). This rapid transition to computerized testing is also occurring in K–12 education. As early as 2009, Education Week’s “Technology Counts” reported that educators in more than half of the U.S. states—where 49 of the 50 states at that time had educational achievement testing—administer some form of computerized testing. The move toward Common Core State Standards will only accelerate this transition given that the two largest consortiums, PARCC and SMARTER Balance, are using technology to develop and deliver computerized tests and to design constructed-response items and performance-based tasks that will be scored using computer algorithms. Computerized testing offers many advantages to examinees and examiners compared to more traditional paper-based tests. For instance, computers support the development of technology-enhanced item types that allow examiners to use more diverse item formats and measure a broader range of knowledge and skills. Computer algorithms can also be developed so these new item types are scored automatically and with limited human intervention, thereby eliminating the need for costly and timeconsuming marking and scoring sessions. Because items are scored immediately, examinees receive instant feedback on their strengths and weaknesses. Computerized tests also permit continuous and on-demand administration, thereby allowing examinees to have more choice about where and when they write their exams. But the advent of computerized testing has also raised new challenges, particularly in the area of item development. Large numbers of items are needed to support the banks necessary for computerized 21 AUTOMATIC ITEM GENERATION",
"title": ""
},
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
},
{
"docid": "d057eece8018a905fe1642a1f40de594",
"text": "6 Abstract— Removal of noise from the original signal is still a bottleneck for researchers. There are several methods and techniques published and each method has its own advantages, disadvantages and assumptions. This paper presents a review of some significant work in the field of Image Denoising.The brief introduction of some popular approaches is provided and discussed. Insights and potential future trends are also discussed",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
},
{
"docid": "48be442dfe31fbbbefb6fbf0833112fb",
"text": "When documents and queries are presented in different languages, the common approach is to translate the query into the document language. While there are a variety of query translation approaches, recent research suggests that combining multiple methods into a single ”structured query” is the most effective. In this paper, we introduce a novel approach for producing a unique combination recipe for each query, as it has also been shown that the optimal combination weights differ substantially across queries and other task specifics. Our query-specific combination method generates statistically significant improvements over other combination strategies presented in the literature, such as uniform and task-specific weighting. An in-depth empirical analysis presents insights about the effect of data size, domain differences, labeling and tuning on the end performance of our approach.",
"title": ""
},
{
"docid": "9f11eb476ab0ae5a353fb0279ea4697d",
"text": "This paper presents a relaxation oscillator that utilizes a supply-stabilized pico-powered voltage and current reference (VCRG) to charge and reset a chopped pair of MIM capacitors at sub-nW power levels. Specifically, a temperature- and line-stabilized reference voltage is generated via a 4-transistor (4T) self-regulated structure, the output of which is used to bias a temperature-compensated gate-leakage transistor to generate a stabilized current reference. The reference current is then used to charge a swapping pair of MIM capacitors to compare to the voltage generated by the same VCRG in a relaxation topology. The design is fabricated in 65 nm CMOS, and 14 measured samples yield a reference voltage of 147.1 mV achieving a temperature coefficient of 364 ppm/°C and a line regulation of 0.21%/V, and a reference current of 10.2 pA achieving a temperature coefficient of 1077.3 ppm/°C and a line regulation of 1.79%/V (all numbers averaged across all samples). The proposed VCRG-based relaxation oscillator achieves an average temperature coefficient of 999.9 ppm/°C from −40 to 120° C and a line regulation of 1.6%/V from 0.6 to 1.1 V, all at a system power consumption of 124.2 pW at 20° C.",
"title": ""
},
{
"docid": "d610f7d468fe2f28637f4aeb95948cd6",
"text": "A computational model is described in which the sizes of variables are represented by the explicit times at which action potentials occur, rather than by the more usual 'firing rate' of neurons. The comparison of patterns over sets of analogue variables is done by a network using different delays for different information paths. This mode of computation explains how one scheme of neuroarchitecture can be used for very different sensory modalities and seemingly different computations. The oscillations and anatomy of the mammalian olfactory systems have a simple interpretation in terms of this representation, and relate to processing in the auditory system. Single-electrode recording would not detect such neural computing. Recognition 'units' in this style respond more like radial basis function units than elementary sigmoid units.",
"title": ""
},
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
{
"docid": "6b9a25385c44fcef85a0e1725f7ff0c2",
"text": "Placement of interior node points is a crucial step in the generation of quality meshes in sweeping algorithms. Two new algorithms were devised for node point placement and implemented in Sweep Tool, the first based on the use of linear transformations between bounding node loops and the second based on smoothing. Examples are given that demonstrate the effectiveness of these algorithms.",
"title": ""
},
{
"docid": "ca3ea61314d43abeac81546e66ff75e4",
"text": "OBJECTIVE\nTo describe and discuss the process used to write a narrative review of the literature for publication in a peer-reviewed journal. Publication of narrative overviews of the literature should be standardized to increase their objectivity.\n\n\nBACKGROUND\nIn the past decade numerous changes in research methodology pertaining to reviews of the literature have occurred. These changes necessitate authors of review articles to be familiar with current standards in the publication process.\n\n\nMETHODS\nNarrative overview of the literature synthesizing the findings of literature retrieved from searches of computerized databases, hand searches, and authoritative texts.\n\n\nDISCUSSION\nAn overview of the use of three types of reviews of the literature is presented. Step by step instructions for how to conduct and write a narrative overview utilizing a 'best-evidence synthesis' approach are discussed, starting with appropriate preparatory work and ending with how to create proper illustrations. Several resources for creating reviews of the literature are presented and a narrative overview critical appraisal worksheet is included. A bibliography of other useful reading is presented in an appendix.\n\n\nCONCLUSION\nNarrative overviews can be a valuable contribution to the literature if prepared properly. New and experienced authors wishing to write a narrative overview should find this article useful in constructing such a paper and carrying out the research process. It is hoped that this article will stimulate scholarly dialog amongst colleagues about this research design and other complex literature review methods.",
"title": ""
},
{
"docid": "4229efa8c62e28794bd2eae055eb1449",
"text": "The rapid growth of e-commerce is imposing profound impacts on modern society. On the supply side, the emergence of e-commerce is greatly changing the operation behavior of some retailers and is increasing product internationalization due to its geographically unlimited nature. On the demand side, the pervasiveness of e-commerce affects how, where, and when consumers shop, and indirectly influences the way in which we live our lives. However, the development of e-commerce is still in an early stage, and why consumers choose (or do not choose) online purchasing is far from being completely understood. To better evaluate and anticipate those profound impacts of e-commerce, therefore, it is important to further refine our understanding of consumers' e-shopping behavior. A number of studies have investigated e-shopping behavior, and reviewing them is valuable for further improving our understanding. This report aims to summarize previous e-shopping research in a systematic way. In this review, we are interested primarily in the potential benefits and costs that the internet offers for the business-to-consumer segment of e-commerce in the transaction (purchase) channel. An overview of the 65 empirical studies analyzed in this report is provided in the Appendix. Most previous studies fall into one or more of several theoretical frameworks, including the theory of reasoned action, the theory of planned behavior, the technology acceptance model, transaction cost theory, innovation diffusion theory, and others. Among them, social psychological theories (the theory of reasoned action, the theory of planned behavior, the technology acceptance model) were widely applied. As shown in the applications of different theories, e-shopping behavior is not a simple decision process, and thus an integration of various theories is necessary to deal with its complexities. We suggest synthesizing these theories through the development of a comprehensive list of benefits and costs, using each of the key constructs of the pertinent theories as a guide to identifying the nature of those benefits and costs. The dependent variables mainly include e-shopping intention and actual e-shopping behavior (a few studies used attitudes toward e-shopping). E-shopping intention was measured by various dimensions. Among them, the directly-stated intention to purchase online is the most frequently used measure. Although some studies used a unidimensional measure, most adopted a latent construct to assess consumers' e-shopping intentions. Actual e-shopping behavior mainly includes three dimensions: adoption, spending, and frequency. Most studies examined one or more of these three dimensions directly, while a few studies constructed a latent …",
"title": ""
},
{
"docid": "eb8bdb2a401f2a1233118e53430ac6c1",
"text": "The two main research branches in intelligent vehicles field are Advanced Driver Assistance Systems (ADAS) [1] and autonomous driving [2]. ADAS generally work on predefined enviroment and limited scenarios such as highway driving, low speed driving, night driving etc. In such situations this systems have sufficiently high performance and the main features that allow their large diffusion and that have enabled commercialization in this years are the low cost, the small size and the easy integration into the vehicle. Autonomous vehicle, on the other hand, should be ready to work over all-scenarios, all-terrain and all-wheather conditions, but nowadays autonomous vehicle are used in protected and structured enviroments or military applications [3], [4]. Generally many differences between ADAS and autonomous vehicles, both hardware and software features, are related on cost and integration: ADAS are embedded into vehicles and might be low cost; on the other hand usually are not heavy limitations on cost and integration related to autonomous vehicles. Obviosly, the main difference is the presence/absence of the driver. Otherwise, most of the undelying ideas are shared, such as perception, planning, actuation needed in this kind of systems.",
"title": ""
},
{
"docid": "6ebd75996b8a652720b23254c9d77be4",
"text": "This paper focuses on a biometric cryptosystem implementation and evaluation based on a number of fingerprint texture descriptors. The texture descriptors, namely, the Gabor filter-based FingerCode, a local binary pattern (LBP), and a local direction pattern (LDP), and their various combinations are considered. These fingerprint texture descriptors are binarized using a biometric discretization method and used in a fuzzy commitment scheme (FCS). We constructed the biometric cryptosystems, which achieve a good performance, by fusing discretized fingerprint texture descriptors and using effective error-correcting codes. We tested the proposed system on a FVC2000 DB2a fingerprint database, and the results demonstrate that the new system significantly improves the performance of the FCS for texture-based",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "a4957c88aee24ee9223afea8b01a8a62",
"text": "This study examined smartphone user behaviors and their relation to self-reported smartphone addiction. Thirty-four users who did not own smartphones were given instrumented iPhones that logged all phone use over the course of the year-long study. At the conclusion of the study, users were asked to rate their level of addiction to the device. Sixty-two percent agreed or strongly agreed that they were addicted to their iPhones. These users showed differentiated smartphone use as compared to those users who did not indicate an addiction. Addicted users spent twice as much time on their phone and launched applications much more frequently (nearly twice as often) as compared to the non-addicted user. Mail, Messaging, Facebook and the Web drove this use. Surprisingly, Games did not show any difference between addicted and nonaddicted users. Addicted users showed significantly lower time-per-interaction than did non-addicted users for Mail, Facebook and Messaging applications. One addicted user reported that his addiction was problematic, and his use data was beyond three standard deviations from the upper hinge. The implications of the relationship between the logged and self-report data are discussed.",
"title": ""
},
{
"docid": "c3af6eae1bd5f2901914d830280eca48",
"text": "This paper proposes a novel approach for the classification of 3D shapes exploiting surface and volumetric clues inside a deep learning framework. The proposed algorithm uses three different data representations. The first is a set of depth maps obtained by rendering the 3D object. The second is a novel volumetric representation obtained by counting the number of filled voxels along each direction. Finally NURBS surfaces are fitted over the 3D object and surface curvature parameters are selected as the third representation. All the three data representations are fed to a multi-branch Convolutional Neural Network. Each branch processes a different data source and produces a feature vector by using convolutional layers of progressively reduced resolution. The extracted feature vectors are fed to a linear classifier that combines the outputs in order to get the final predictions. Experimental results on the ModelNet dataset show that the proposed approach is able to obtain a state-of-the-art performance.",
"title": ""
},
{
"docid": "c43ad751dade7d0a5a396f95cc904030",
"text": "The electric grid is radically evolving and transforming into the smart grid, which is characterized by improved energy efficiency and manageability of available resources. Energy management (EM) systems, often integrated with home automation systems, play an important role in the control of home energy consumption and enable increased consumer participation. These systems provide consumers with information about their energy consumption patterns and help them adopt energy-efficient behavior. The new generation EM systems leverage advanced analytics and communication technologies to offer consumers actionable information and control features, while ensuring ease of use, availability, security, and privacy. In this article, we present a survey of the state of the art in EM systems, applications, and frameworks. We define a set of requirements for EM systems and evaluate several EM systems in this context. We also discuss emerging trends in this area.",
"title": ""
},
{
"docid": "71e8c35e0f0b5756d14821622a8d0fc5",
"text": "Classic drugs of abuse lead to specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens (the key neural structure for reward, motivation, and addiction). In contrast, caffeine at doses reflecting daily human consumption does not induce a release of dopamine in the shell of the nucleus accumbens but leads to a release of dopamine in the prefrontal cortex, which is consistent with its reinforcing properties.",
"title": ""
}
] | scidocsrr |
acc8ee963ac07519f2056794fab5eb44 | An axiomatic approach for result diversification | [
{
"docid": "c0c7752c6b9416e281c3649e70f9daae",
"text": "Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"title": ""
}
] | [
{
"docid": "de08442e673ba8ca91244fedb020796c",
"text": "The differences between the fields of Human-Computer Interaction and Security (HCISec) and Human-Computer Interaction (HCI) have not been investigated very closely. Many HCI methods and procedures have been adopted by HCISec researchers, however the extent to which these apply to the field of HCISec is arguable given the fine balance between improving the ease of use of a secure system and potentially weakening its security. That is to say that the techniques prevalent in HCI are aimed at improving users' effectiveness, efficiency or satisfaction, but they do not take into account the potential threats and vulnerabilities that they can introduce. To address this problem, we propose a security and usability threat model detailing the different factors that are pertinent to the security and usability of secure systems, together with a process for assessing these.",
"title": ""
},
{
"docid": "42984b6e288bb144619d01ba37bfce68",
"text": "Reinforcement learning has steadily improved and outperform human in lots of traditional games since the resurgence of deep neural network. However, these success is not easy to be copied to autonomous driving because the state spaces in real world are extreme complex and action spaces are continuous and fine control is required. Moreover, the autonomous driving vehicles must also keep functional safety under the complex environments. To deal with these challenges, we first adopt the deep deterministic policy gradient (DDPG) algorithm, which has the capacity to handle complex state and action spaces in continuous domain. We then choose The Open Racing Car Simulator (TORCS) as our environment to avoid physical damage. Meanwhile, we select a set of appropriate sensor information from TORCS and design our own rewarder. In order to fit DDPG algorithm to TORCS, we design our network architecture for both actor and critic inside DDPG paradigm. To demonstrate the effectiveness of our model, We evaluate on different modes in TORCS and show both quantitative and qualitative results.",
"title": ""
},
{
"docid": "002abd54753db9928d8e6832d3358084",
"text": "State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalize well across domains. Even in domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data. While straight-forward word representations of predicates and arguments improve performance, we show that further gains are achieved by composing representations that model the interaction between predicate and argument, and capture full argument spans.",
"title": ""
},
{
"docid": "06a10608b51cc1ae6c7ef653faf637a9",
"text": "WE aLL KnoW how to protect our private or most valuable data from unauthorized access: encrypt it. When a piece of data M is encrypted under a key K to yield a ciphertext C=EncK(M), only the intended recipient (who knows the corresponding secret decryption key S) will be able to invert the encryption function and recover the original plaintext using the decryption algorithm DecS(C)=DecS(EncK(M))=M. Encryption today—in both symmetric (where S=K) and public key versions (where S remains secret even when K is made publicly available)—is widely used to achieve confidentiality in many important and well-known applications: online banking, electronic shopping, and virtual private networks are just a few of the most common applications using encryption, typically as part of a larger protocol, like the TLS protocol used to secure communication over the Internet. Still, the use of encryption to protect valuable or sensitive data can be very limiting and inflexible. Once the data M is encrypted, the corresponding ciphertext C behaves to a large extent as a black box: all we can do with the box is keep it closed or opened in order to access and operate on the data. In many situations this may be exactly what we want. For example, take a remote storage system, where we want to store a large collection of documents or data files. We store the data in encrypted form, and when we want to access a specific piece of data, we retrieve the corresponding ciphertext, decrypting it locally on our own trusted computer. But as soon as we go beyond the simple data storage/ retrieval model, we are in trouble. Say we want the remote system to provide a more complex functionality, like a database system capable of indexing and searching our data, or answering complex relational or semistructured queries. Using standard encryption technology we are immediately faced with a dilemma: either we store our data unencrypted and reveal our precious or sensitive data to the storage/ database service provider, or we encrypt it and make it impossible for the provider to operate on it. If data is encrypted, then answering even a simple counting query (for example, the number of records or files that contain a certain keyword) would typically require downloading and decrypting the entire database content. Homomorphic encryption is a special kind of encryption that allows operating on ciphertexts without decrypting them; in fact, without even knowing the decryption key. For example, given ciphertexts C=EncK(M) and C'=EncK(M'), an additively homomorphic encryption scheme would allow to combine C and C' to obtain EncK(M+M'). Such encryption schemes are immensely useful in the design of complex cryptographic protocols. For example, an electronic voting scheme may collect encrypted votes Ci=EncK(Mi) where each vote Mi is either 0 or 1, and then tally them to obtain the encryption of the outcome C=EncK(M1+..+Mn). This would be decrypted by an appropriate authority that has the decryption key and ability to announce the result, but the entire collection and tallying process would operate on encrypted data without the use of the secret key. (Of course, this is an oversimplified protocol, as many other issues must be addressed in a real election scheme, but it well illustrates the potential usefulness of homomorphic encryption.) To date, all known homomorphic encryption schemes supported essentially only one basic operation, for example, addition. 
But the potential of fully homomorphic encryption (that is, homomorphic encryption supporting arbitrarily complex computations on ciphertexts) is clear. Think of encrypting your queries before you send them to your favorite search engine, and receive the encryption of the result without the search engine even knowing what the query was. Imagine running your most computationally intensive programs on your large datasets on a cluster of remote computers, as in a cloud computing environment, while keeping both your programs, data, and results encrypted and confidential. The idea of fully homomorphic encryption schemes was first proposed by Rivest, Adleman, and Dertouzos the late 1970s, but remained a mirage for three decades, the never-to-be-found Holy Grail of cryptography. At least until 2008, when Craig Gentry announced a new approach to the construction of fully homomorphic cryptosystems. In the following paper, Gentry describes his innovative method for constructing fully homomorphic encryption schemes, the first credible solution to this long-standing major problem in cryptography and theoretical computer science at large. While much work is still to be done before fully homomorphic encryption can be used in practice, Gentry’s work is clearly a landmark achievement. Before Gentry’s discovery many members of the cryptography research community thought fully homomorphic encryption was impossible to achieve. Now, most cryptographers (me among them) are convinced the Holy Grail exists. In fact, there must be several of them, more or less efficient ones, all out there waiting to be discovered. Gentry gives a very accessible and enjoyable description of his general method to achieve fully homomorphic encryption as well as a possible instantiation of his framework recently proposed by van Dijik, Gentry, Halevi, and Vaikuntanathan. He has taken great care to explain his technically complex results, some of which have their roots in lattice-based cryptography, using a metaphorical tale of a jeweler and her quest to keep her precious materials safe, while at the same time allowing her employees to work on them. Gentry’s homomorphic encryption work is truly worth a read.",
"title": ""
},
{
"docid": "5c74d0cfcbeaebc29cdb58a30436556a",
"text": "Modular decomposition is an effective means to achieve a complex system, but that of current part-component-based does not meet the needs of the positive development of the production. Design Structure Matrix (DSM) can simultaneously reflect the sequence, iteration, and feedback information, and express the parallel, sequential, and coupled relationship between DSM elements. This article, a modular decomposition method, named Design Structure Matrix Clustering modularize method, is proposed, concerned procedures are define, based on sorting calculate and clustering analysis of DSM, according to the rules of rows exchanges and columns exchange with the same serial number. The purpose and effectiveness of DSM clustering modularize method are confirmed through case study of assembly and calibration system for the large equipment.",
"title": ""
},
{
"docid": "8fe5ad58edf4a1c468fd0b6a303729ee",
"text": "Das CDISC Operational Data Model (ODM) ist ein populärer Standard in klinischen Datenmanagementsystemen (CDMS). Er beschreibt sowohl die Struktur einer klinischen Prüfung inklusive der Visiten, Formulare, Datenele mente und Codelisten als auch administrative Informationen wie gültige Nutzeracco unts. Ferner enthält er alle erhobenen klinischen Fakten über die Pro banden. Sein originärer Einsatzzweck liegt in der Archivierung von Studiendatenbanken und dem Austausch klinischer Daten zwischen verschiedenen CDMS. Aufgrund de r reichhaltigen Struktur eignet er sich aber auch für weiterführende Anwendungsfälle. Im Rahmen studentischer Praktika wurden verschied ene Szenarien für funktionale Ergänzungen des freien CDMS OpenClinica unters ucht und implementiert, darunter die Generierung eines Annotated CRF, der Import vo n Studiendaten per Web-Service, das semiautomatisierte Anlegen von Studien so wie der Export von Studiendaten in einen relationalen Data Mart und in ein Forschungs-Data-Warehouse auf Basis von i2b2.",
"title": ""
},
{
"docid": "7ce9ef05d3f4a92f6b187d7986b70be1",
"text": "With the growth in the consumer electronics industry, it is vital to develop an algorithm for ultrahigh definition products that is more effective and has lower time complexity. Image interpolation, which is based on an autoregressive model, has achieved significant improvements compared with the traditional algorithm with respect to image reconstruction, including a better peak signal-to-noise ratio (PSNR) and improved subjective visual quality of the reconstructed image. However, the time-consuming computation involved has become a bottleneck in those autoregressive algorithms. Because of the high time cost, image autoregressive-based interpolation algorithms are rarely used in industry for actual production. In this study, in order to meet the requirements of real-time reconstruction, we use diverse compute unified device architecture (CUDA) optimization strategies to make full use of the graphics processing unit (GPU) (NVIDIA Tesla K80), including a shared memory and register and multi-GPU optimization. To be more suitable for the GPU-parallel optimization, we modify the training window to obtain a more concise matrix operation. Experimental results show that, while maintaining a high PSNR and subjective visual quality and taking into account the I/O transfer time, our algorithm achieves a high speedup of 147.3 times for a Lena image and 174.8 times for a 720p video, compared to the original single-threaded C CPU code with -O2 compiling optimization.",
"title": ""
},
{
"docid": "e7bedfa690b456a7a93e5bdae8fff79c",
"text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).",
"title": ""
},
{
"docid": "628a4f05cc6c39585bcca8d5f503f277",
"text": "Recent studies have linked atmospheric particulate matter with human health problems. In many urban areas, mobile sources are a major source of particulate matter (PM) and the dominant source of fine particles or PM2.5 (PM smaller than 2.5 pm in aerodynamic diameter). Dynamometer studies have implicated diesel engines as being a significant source of ultrafine particles (< 0.1 microm), which may also exhibit deleterious health impacts. In addition to direct tailpipe emissions, mobile sources contribute to ambient particulate levels by brake and tire wear and by resuspension of particles from pavement. Information about particle emission rates, size distributions, and chemical composition from in-use light-duty (LD) and heavy-duty (HD) vehicles is scarce, especially under real-world operating conditions. To characterize particulate emissions from a limited set of in-use vehicles, we studied on-road emissions from vehicles operating under hot-stabilized conditions, at relatively constant speed, in the Tuscarora Mountain Tunnel along the Pennsylvania Turnpike from May 18 through 23, 1999. There were five specific aims of the study. (1) obtain chemically speciated diesel profiles for the source apportionment of diesel versus other ambient constituents in the air and to determine the chemical species present in real-world diesel emissions; (2) measure particle number and size distribution of chemically speciated particles in the atmosphere; (3) identify, by reference to data in years past, how much change has occurred in diesel exhaust particulate mass; (4) measure particulate emissions from LD gasoline vehicles to determine their contribution to the observed particle levels compared to diesels; and (5) determine changes over time in gas phase emissions by comparing our results with those of previous studies. Comparing the results of this study with our 1992 results, we found that emissions of C8 to C20 hydrocarbons, carbon monoxide (CO), and carbon dioxide (CO2) from HD diesel emissions substantially decreased over the seven-year period. Particulate mass emissions showed a similar trend. Considering a 25-year period, we observed a continued downward trend in HD particulate emissions from approximately 1,100 mg/km in 1974 to 132 mg/km (reported as PM2.5) in this study. The LD particle emission factor was considerably less than the HD value, but given the large fraction of LD vehicles, emissions from this source cannot be ignored. Results of the current study also indicate that both HD and LD vehicles emit ultrafine particles and that these particles are preserved under real-world dilution conditions. Particle number distributions were dominated by ultrafine particles with count mean diameters of 17 to 13 nm depending on fleet composition. These particles appear to be primarily composed of sulfur, indicative of sulfuric acid emission and nucleation. Comparing the 1992 and 1999 HD emission rates, we observed a 48% increase in the NOx/CO2 emissions ratio. This finding supports the assumption that many new-technology diesel engines conserve fuel but increase NOx emissions.",
"title": ""
},
{
"docid": "929c3c0bd01056851952660ffd90673a",
"text": "SUMMARY: The Food and Drug Administration (FDA) is issuing this proposed rule to amend the 1994 tentative final monograph or proposed rule (the 1994 TFM) for over-the-counter (OTC) antiseptic drug products. In this proposed rule, we are proposing to establish conditions under which OTC antiseptic products intended for use by health care professionals in a hospital setting or other health care situations outside the hospital are generally recognized as safe and effective. In the 1994 TFM, certain antiseptic active ingredients were proposed as being generally recognized as safe for use in health care settings based on safety data evaluated by FDA as part of its ongoing review of OTC antiseptic drug products. However, in light of more recent scientific developments, we are now proposing that additional safety data are necessary to support the safety of antiseptic active ingredients for these uses. We also are proposing that all health care antiseptic active ingredients have in vitro data characterizing the ingredient's antimicrobial properties and in vivo clinical simulation studies showing that specified log reductions in the amount of certain bacteria are achieved using the ingredient. DATES: Submit electronic or written comments by October 28, 2015. See section VIII of this document for the proposed effective date of a final rule based on this proposed rule. ADDRESSES: You may submit comments by any of the following methods: Electronic Submissions Submit electronic comments in the following way: • Federal eRulemaking Portal: http:// www.regulations.gov. Follow the instructions for submitting comments.",
"title": ""
},
{
"docid": "4cfc991f626f6fc9d131514985863127",
"text": "Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of population activity is the trial-to-trial correlated fluctuation of spike train outputs from recorded neuron pairs. Similar to the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the physiological mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data.",
"title": ""
},
{
"docid": "b791d4e531f893e529595110d0331822",
"text": "Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.",
"title": ""
},
{
"docid": "e5e6f213762e3c89f536a0ea2fc554f8",
"text": "New and emerging terahertz technology applications make this a very exciting time for the scientists, engineers, and technologists in the field. New sensors and detectors have been the primary driving force behind the unprecedented progress in terahertz technology, but in the last decade extraordinary developments in terahertz sources have also occurred. Driven primarily by space based missions for Earth, planetary, and astrophysical science, frequency multiplied sources have dominated the field in recent years, at least in the 2-3 THz frequency range. More recently, over the past few years terahertz quantum cascade lasers (QCLs) have made tremendous strides, finding increasing applications in terahertz systems. Vacuum electronic devices and photonic sources are not far behind either. In this article, the various technologies for terahertz sources are reviewed, and future trends are discussed.",
"title": ""
},
{
"docid": "89a73876c24508d92050f2055292d641",
"text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.",
"title": ""
},
{
"docid": "1cbdf72cbb83763040abedb74748f6cd",
"text": "Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. As new tools and techniques are emerging everyday to make information accessible over the Internet, so is their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion.",
"title": ""
},
{
"docid": "d4641f30306d5e653da94ccdeec2239c",
"text": "Terpenes are economically and ecologically important phytochemicals. Their synthesis is controlled by the terpene synthase (TPS) gene family, which is highly diversified throughout the plant kingdom. The plant family Myrtaceae are characterised by especially high terpene concentrations, and considerable variation in terpene profiles. Many Myrtaceae are grown commercially for terpene products including the eucalypts Corymbia and Eucalyptus. Eucalyptus grandis has the largest TPS gene family of plants currently sequenced, which is largely conserved in the closely related E. globulus. However, the TPS gene family has been well studied only in these two eucalypt species. The recent assembly of two Corymbia citriodora subsp. variegata genomes presents an opportunity to examine the conservation of this important gene family across more divergent eucalypt lineages. Manual annotation of the TPS gene family in C. citriodora subsp. variegata revealed a similar overall number, and relative subfamily representation, to that previously reported in E. grandis and E. globulus. Many of the TPS genes were in physical clusters that varied considerably between Eucalyptus and Corymbia, with several instances of translocation, expansion/contraction and loss. Notably, there was greater conservation in the subfamilies involved in primary metabolism than those involved in secondary metabolism, likely reflecting different selective constraints. The variation in cluster size within subfamilies and the broad conservation between the eucalypts in the face of this variation are discussed, highlighting the potential contribution of selection, concerted evolution and stochastic processes. These findings provide the foundation to better understand terpene evolution within the ecologically and economically important Myrtaceae.",
"title": ""
},
{
"docid": "00b13f673d9e6efc1edebf2641204ea6",
"text": "Two studies examined the effects of implicit and explicit priming of aging stereotypes. Implicit primes had a significant effect on older adults' memory, with positive primes associated with greater recall than negative primes. With explicit primes, older adults were able to counteract the impact of negative stereotypes when the cues were relatively subtle, but blatant stereotype primes suppressed performance regardless of prime type. No priming effects under either presentation condition were obtained for younger adults, indicating that the observed implicit effects are specific to those for whom the stereotype is self-relevant. Findings emphasize the importance of social-situational factors in determining older adults' memory performance and contribute to the delineation of situations under which stereotypes are most influential.",
"title": ""
},
{
"docid": "d1069c06341e484e7f3b5ab7a4a49a2d",
"text": "In a \"nutrition transition\", the consumption of foods high in fats and sweeteners is increasing throughout the developing world. The transition, implicated in the rapid rise of obesity and diet-related chronic diseases worldwide, is rooted in the processes of globalization. Globalization affects the nature of agri-food systems, thereby altering the quantity, type, cost and desirability of foods available for consumption. Understanding the links between globalization and the nutrition transition is therefore necessary to help policy makers develop policies, including food policies, for addressing the global burden of chronic disease. While the subject has been much discussed, tracing the specific pathways between globalization and dietary change remains a challenge. To help address this challenge, this paper explores how one of the central mechanisms of globalization, the integration of the global marketplace, is affecting the specific diet patterns. Focusing on middle-income countries, it highlights the importance of three major processes of market integration: (I) production and trade of agricultural goods; (II) foreign direct investment in food processing and retailing; and (III) global food advertising and promotion. The paper reveals how specific policies implemented to advance the globalization agenda account in part for some recent trends in the global diet. Agricultural production and trade policies have enabled more vegetable oil consumption; policies on foreign direct investment have facilitated higher consumption of highly-processed foods, as has global food marketing. These dietary outcomes also reflect the socioeconomic and cultural context in which these policies are operating. An important finding is that the dynamic, competitive forces unleashed as a result of global market integration facilitates not only convergence in consumption habits (as is commonly assumed in the \"Coca-Colonization\" hypothesis), but adaptation to products targeted at different niche markets. This convergence-divergence duality raises the policy concern that globalization will exacerbate uneven dietary development between rich and poor. As high-income groups in developing countries accrue the benefits of a more dynamic marketplace, lower-income groups may well experience convergence towards poor quality obseogenic diets, as observed in western countries. Global economic policies concerning agriculture, trade, investment and marketing affect what the world eats. They are therefore also global food and health policies. Health policy makers should pay greater attention to these policies in order to address some of the structural causes of obesity and diet-related chronic diseases worldwide, especially among the groups of low socioeconomic status.",
"title": ""
},
{
"docid": "c9ff6e6c47b6362aaba5f827dd1b48f2",
"text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.",
"title": ""
},
{
"docid": "7b314cd0c326cb977b92f4907a0ed737",
"text": "This is the third part of a series of papers that provide a comprehensive survey of the techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with general target motion models and ballistic target motion models, respectively. This part surveys measurement models, including measurement model-based techniques, used in target tracking. Models in Cartesian, sensor measurement, their mixed, and other coordinates are covered. The stress is on more recent advances — topics that have received more attention recently are discussed in greater details.",
"title": ""
}
] | scidocsrr |
31676b77fc40d569e619caec0dd4fc17 | A Pan-Cancer Proteogenomic Atlas of PI3K/AKT/mTOR Pathway Alterations. | [
{
"docid": "99ff0acb6d1468936ae1620bc26c205f",
"text": "The Cancer Genome Atlas (TCGA) has used the latest sequencing and analysis methods to identify somatic variants across thousands of tumours. Here we present data and analytical results for point mutations and small insertions/deletions from 3,281 tumours across 12 tumour types as part of the TCGA Pan-Cancer effort. We illustrate the distributions of mutation frequencies, types and contexts across tumour types, and establish their links to tissues of origin, environmental/carcinogen influences, and DNA repair defects. Using the integrated data sets, we identified 127 significantly mutated genes from well-known (for example, mitogen-activated protein kinase, phosphatidylinositol-3-OH kinase, Wnt/β-catenin and receptor tyrosine kinase signalling pathways, and cell cycle control) and emerging (for example, histone, histone modification, splicing, metabolism and proteolysis) cellular processes in cancer. The average number of mutations in these significantly mutated genes varies across tumour types; most tumours have two to six, indicating that the number of driver mutations required during oncogenesis is relatively small. Mutations in transcriptional factors/regulators show tissue specificity, whereas histone modifiers are often mutated across several cancer types. Clinical association analysis identifies genes having a significant effect on survival, and investigations of mutations with respect to clonal/subclonal architecture delineate their temporal orders during tumorigenesis. Taken together, these results lay the groundwork for developing new diagnostics and individualizing cancer treatment.",
"title": ""
}
] | [
{
"docid": "6d00686ad4d2d589a415d810b2fcc876",
"text": "The accuracy of voice activity detection (VAD) is one of the most important factors which influence the capability of the speech recognition system, how to detect the endpoint precisely in noise environment is still a difficult task. In this paper, we proposed a new VAD method based on Mel-frequency cepstral coefficients (MFCC) similarity. We first extracts the MFCC of a voice signal for each frame, followed by calculating the MFCC Euclidean distance and MFCC correlation coefficient of the test frame and the background noise, Finally, give the experimental results. The results show that at low SNR circumstance, MFCC similarity detection method is better than traditional short-term energy method. Compared with Euclidean distance measure method, correlation coefficient is better.",
"title": ""
},
{
"docid": "070d23b78d7808a19bde68f0ccdd7587",
"text": "Deep learning is playing a more and more important role in our daily life and scientific research such as autonomous systems, intelligent life and data mining. However, numerous studies have showed that deep learning with superior performance on many tasks may suffer from subtle perturbations constructed by attacker purposely, called adversarial perturbations, which are imperceptible to human observers but completely effect deep neural network models. The emergence of adversarial attacks has led to questions about neural networks. Therefore, machine learning security and privacy are becoming an increasingly active research area. In this paper, we summarize the prevalent methods for the generating adversarial attacks according to three groups. We elaborated on their ideas and principles of generation. We further analyze the common limitations of these methods and implement statistical experiments of the last layer output on CleverHans to reveal that the detection of adversarial samples is not as difficult as it seems and can be achieved in some relatively simple manners.",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: crego@bus.olemiss.edu (C. R (D. Gamboa), fred.glover@colorado.edu (F. Glover), colin.j.osterman@navy.mil (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "47de26ecd5f759afa7361c7eff9e9b25",
"text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied",
"title": ""
},
{
"docid": "48485e967c5aa345a53b91b47cc0e6d0",
"text": "The buccinator musculomucosal flaps are actually considered the main reconstructive option for small-moderate defects of the oral mucosa. In this paper we present our experience with the posteriorly based buccinator musculomucosal flap. A retrospective review was performed of all patients who had had a Bozola flap reconstruction at the Operative Unit of Maxillo-Facial Surgery of Parma, Italy, between 2003 and 2010. The Bozola flap was used in 19 patients. In most cases they had defects of the palate (n=12). All flaps were harvested successfully and no major complications occurred. Minor complications were observed in two cases. At the end of the follow up all patients returned to a normal diet without alterations of speech and swallowing. We consider the Bozola flap the first choice for the reconstruction of defects involving the palate, the cheek and the postero-lateral tongue and floor of the mouth.",
"title": ""
},
{
"docid": "d7f743ddff9863b046ab91304b37a667",
"text": "In sensor networks, passive localization can be performed by exploiting the received signals of unknown emitters. In this paper, the Time of Arrival (TOA) measurements are investigated. Often, the unknown time of emission is eliminated by calculating the difference between two TOA measurements where Time Difference of Arrival (TDOA) measurements are obtained. In TOA processing, additionally, the unknown time of emission is to be estimated. Therefore, the target state is extended by the unknown time of emission. A comparison is performed investigating the attainable accuracies for localization based on TDOA and TOA measurements given by the Cramér-Rao Lower Bound (CRLB). Using the Maximum Likelihood estimator, some characteristic features of the cost functions are investigated indicating a better performance of the TOA approach. But counterintuitive, Monte Carlo simulations do not support this indication, but show the comparability of TDOA and TOA localization.",
"title": ""
},
{
"docid": "8a37001733b0ee384277526bd864fe04",
"text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.",
"title": ""
},
{
"docid": "7c1c7eb4f011ace0734dd52759ce077f",
"text": "OBJECTIVES\nTo investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke.\n\n\nDESIGN\nA randomized controlled trial.\n\n\nSETTING\nOccupational therapy clinics in medical centers.\n\n\nSUBJECTS\nThirty-one subacute stroke patients were recruited.\n\n\nINTERVENTIONS\nParticipants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or to the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device.\n\n\nMAIN MEASURES\nMotor impairments were assessed by the Fugal-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale.\n\n\nRESULTS\nThe primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale ( p = 0.012) and a trend for greater improvement on the modified Rankin Scale ( p = 0.065) than the unprimed group.\n\n\nCONCLUSION\nBilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.",
"title": ""
},
{
"docid": "52c0c6d1deacdca44df5000b2b437c78",
"text": "This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation- maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.",
"title": ""
},
{
"docid": "2e987add43a584bdd0a67800ad28c5f8",
"text": "The bones of elderly people with osteoporosis are susceptible to either traumatic fracture as a result of external impact, such as what happens during a fall, or even spontaneous fracture without trauma as a result of muscle contraction [1, 2]. Understanding the fracture behavior of bone tissue will help researchers find proper treatments to strengthen the bone in order to prevent such fractures, and design better implants to reduce the chance of secondary fracture after receiving the implant.",
"title": ""
},
{
"docid": "863db7439c2117e36cc2a789b557a665",
"text": "A core brain network has been proposed to underlie a number of different processes, including remembering, prospection, navigation, and theory of mind [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007]. This purported network—medial prefrontal, medial-temporal, and medial and lateral parietal regions—is similar to that observed during default-mode processing and has been argued to represent self-projection [Buckner, R. L., & Carroll, D. C. Self-projection and the brain. Trends in Cognitive Sciences, 11, 49–57, 2007] or scene-construction [Hassabis, D., & Maguire, E. A. Deconstructing episodic memory with construction. Trends in Cognitive Sciences, 11, 299–306, 2007]. To date, no systematic and quantitative demonstration of evidence for this common network has been presented. Using the activation likelihood estimation (ALE) approach, we conducted four separate quantitative meta-analyses of neuroimaging studies on: (a) autobiographical memory, (b) navigation, (c) theory of mind, and (d) default mode. A conjunction analysis between these domains demonstrated a high degree of correspondence. We compared these findings to a separate ALE analysis of prospection studies and found additional correspondence. Across all domains, and consistent with the proposed network, correspondence was found within the medial-temporal lobe, precuneus, posterior cingulate, retrosplenial cortex, and the temporo-parietal junction. Additionally, this study revealed that the core network extends to lateral prefrontal and occipital cortices. Autobiographical memory, prospection, theory of mind, and default mode demonstrated further reliable involvement of the medial prefrontal cortex and lateral temporal cortices. Autobiographical memory and theory of mind, previously studied as distinct, exhibited extensive functional overlap. These findings represent quantitative evidence for a core network underlying a variety of cognitive domains.",
"title": ""
},
{
"docid": "566412870c83e5e44fabc50487b9d994",
"text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.",
"title": ""
},
{
"docid": "28574c82a49b096b11f1b78b5d62e425",
"text": "A major reason for the current reproducibility crisis in the life sciences is the poor implementation of quality control measures and reporting standards. Improvement is needed, especially regarding increasingly complex in vitro methods. Good Cell Culture Practice (GCCP) was an effort from 1996 to 2005 to develop such minimum quality standards also applicable in academia. This paper summarizes recent key developments in in vitro cell culture and addresses the issues resulting for GCCP, e.g. the development of induced pluripotent stem cells (iPSCs) and gene-edited cells. It further deals with human stem-cell-derived models and bioengineering of organo-typic cell cultures, including organoids, organ-on-chip and human-on-chip approaches. Commercial vendors and cell banks have made human primary cells more widely available over the last decade, increasing their use, but also requiring specific guidance as to GCCP. The characterization of cell culture systems including high-content imaging and high-throughput measurement technologies increasingly combined with more complex cell and tissue cultures represent a further challenge for GCCP. The increasing use of gene editing techniques to generate and modify in vitro culture models also requires discussion of its impact on GCCP. International (often varying) legislations and market forces originating from the commercialization of cell and tissue products and technologies are further impacting on the need for the use of GCCP. This report summarizes the recommendations of the second of two workshops, held in Germany in December 2015, aiming map the challenge and organize the process or developing a revised GCCP 2.0.",
"title": ""
},
{
"docid": "59c2e1dcf41843d859287124cc655b05",
"text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.",
"title": ""
},
{
"docid": "66370e97fba315711708b13e0a1c9600",
"text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.",
"title": ""
},
{
"docid": "a2f65eb4a81bc44ea810d834ab33d891",
"text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.",
"title": ""
},
{
"docid": "d56807574d6185c6e3cd9a8e277f8006",
"text": "There is a substantial literature on e-government that discusses information and communication technology (ICT) as an instrument for reducing the role of bureaucracy in government organizations. The purpose of this paper is to offer a critical discussion of this literature and to provide a complementary argument, which favors the use of ICT in the public sector to support the operations of bureaucratic organizations. Based on the findings of a case study – of the Venice municipality in Italy – the paper discusses how ICT can be used to support rather than eliminate bureaucracy. Using the concepts of e-bureaucracy and functional simplification and closure, the paper proposes evidence and support for the argument that bureaucracy should be preserved and enhanced where e-government policies are concerned. Functional simplification and closure are very valuable concepts for explaining why this should be a viable approach.",
"title": ""
},
{
"docid": "77bbeb9510f4c9000291910bf06e4a22",
"text": "Traveling Salesman Problem is an important optimization issue of many fields such as transportation, logistics and semiconductor industries and it is about finding a Hamiltonian path with minimum cost. To solve this problem, many researchers have proposed different approaches including metaheuristic methods. Artificial Bee Colony algorithm is a well known swarm based optimization technique. In this paper we propose a new Artificial Bee Colony algorithm called Combinatorial ABC for Traveling Salesman Problem. Simulation results show that this Artificial Bee Colony algorithm can be used for combinatorial optimization problems.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "de761c4e3e79b5b4d056552e0a71a7b6",
"text": "A novel multiple-input multiple-output (MIMO) dielectric resonator antenna (DRA) for long term evolution (LTE) femtocell base stations is described. The proposed antenna is able to transmit and receive information independently using TE and HE modes in the LTE bands 12 (698-716 MHz, 728-746 MHz) and 17 (704-716 MHz, 734-746 MHz). A systematic design method based on perturbation theory is proposed to induce mode degeneration for MIMO operation. Through perturbing the boundary of the DRA, the amount of energy stored by a specific mode is changed as well as the resonant frequency of that mode. Hence, by introducing an adequate boundary perturbation, the TE and HE modes of the DRA will resonate at the same frequency and share a common impedance bandwidth. The simulated mutual coupling between the modes was as low as - 40 dB . It was estimated that in a rich scattering environment with an Signal-to-Noise Ratio (SNR) of 20 dB per receiver branch, the proposed MIMO DRA was able to achieve a channel capacity of 11.1 b/s/Hz (as compared to theoretical maximum 2 × 2 capacity of 13.4 b/s/Hz). Our experimental measurements successfully demonstrated the design methodology proposed in this work.",
"title": ""
}
] | scidocsrr |
4a20916cf1ff2f9e74067374f231ac8f | A hybrid support vector machines and logistic regression approach for forecasting intermittent demand of spare parts | [
{
"docid": "386cd963cf70c198b245a3251c732180",
"text": "Support vector machines (SVMs) are promising methods for the prediction of -nancial timeseries because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in -nancial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction. c © 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "700d3e2cb64624df33ef411215d073ab",
"text": "A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.",
"title": ""
}
] | [
{
"docid": "814923f39e568d9e56da015c7bb311bf",
"text": "Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional, input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.",
"title": ""
},
{
"docid": "2b8aa68835bc61f3d0b5da39441185c9",
"text": "This position paper explores the threat to individual privacy due to the widespread use of consumer drones. Present day consumer drones are equipped with sensors such as cameras and microphones, and their types and numbers can be well expected to increase in future. Drone operators have absolute control on where the drones fly and what the on-board sensors record with no options for bystanders to protect their privacy. This position paper proposes a policy language that allows homeowners, businesses, governments, and privacy-conscious individuals to specify location access-control for drones, and discusses how these policy-based controls might be realized in practice. This position paper also explores the potential future problem of managing consumer drone traffic that is likely to emerge with increasing use of consumer drones for various tasks. It proposes a privacy preserving traffic management protocol for directing drones towards their respective destinations without requiring drones to reveal their destinations.",
"title": ""
},
{
"docid": "d80580490ac7d968ff08c2a9ee159028",
"text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.",
"title": ""
},
{
"docid": "ca6eb17d02fd8055ea37ca66306f8bb5",
"text": "Advances in satellite imagery presents unprecedented opportunities for understanding natural and social phenomena at global and regional scales. Although the field of satellite remote sensing has evaluated imperative questions to human and environmental sustainability, scaling those techniques to very high spatial resolutions at regional scales remains a challenge. Satellite imagery is now more accessible with greater spatial, spectral and temporal resolution creating a data bottleneck in identifying the content of images. Because satellite images are unlabeled, unsupervised methods allow us to organize images into coherent groups or clusters. However, the performance of unsupervised methods, like all other machine learning methods, depends on features. Recent studies using features from pre-trained networks have shown promise for learning in new datasets. This suggests that features from pre-trained networks can be used for learning in temporally and spatially dynamic data sources such as satellite imagery. It is not clear, however, which features from which layer and network architecture should be used for learning new tasks. In this paper, we present an approach to evaluate the transferability of features from pre-trained Deep Convolutional Neural Networks for satellite imagery. We explore and evaluate different features and feature combinations extracted from various deep network architectures, and systematically evaluate over 2,000 network-layer combinations. In addition, we test the transferability of our engineered features and learned features from an unlabeled dataset to a different labeled dataset. Our feature engineering and learning are done on the unlabeled Draper Satellite Chronology dataset, and we test on the labeled UC Merced Land dataset to achieve near state-of-the-art classification results. These results suggest that even without any or minimal training, these networks can generalize well to other datasets. This method could be useful in the task of clustering unlabeled images and other unsupervised machine learning tasks.",
"title": ""
},
{
"docid": "22ab14bba18c990d2b096cb5aeaa6314",
"text": "Airport traffic consists of aircraft performing landing, takeoff and taxi procedures. It is controlled by air traffic controller (ATC). To safely perform this task he/she uses traffic surveillance equipment and voice communication systems to issue control clearances. One of the most important indicators of this process efficiency is practical airport capacity, which refers to the number of aircraft handled and delays which occurred at the same time. This paper presents the concept of airport traffic modelling using coloured, timed, stochastic Petri nets. By the example of the airport with one runway and simultaneous takeoff and landing operations, the applicability of such models in analysis of air traffic processes is shown. Simulation experiments, in which CPN Tools package was used, showed the impact of the initial formation of landing aircraft stream on airside capacity of the airport. They also showed the possibility of its increase by changes in the organisation of takeoff and landing processes.",
"title": ""
},
{
"docid": "e7bb89000329245bccdecbc80549109c",
"text": "This paper presents a tutorial overview of the use of coupling between nonadjacent resonators to produce transmission zeros at real frequencies in microwave filters. Multipath coupling diagrams are constructed and the relative phase shifts of multiple paths are observed to produce the known responses of the cascaded triplet and quadruplet sections. The same technique is also used to explore less common nested cross-coupling structures and to predict their behavior. A discussion of the effects of nonzero electrical length coupling elements is presented. Finally, a brief categorization of the various synthesis and implementation techniques available for these types of filters is given.",
"title": ""
},
{
"docid": "dcfe8e834a7726aa49ea37368ffc6ff6",
"text": "Object recognition and categorization are computationally difficult tasks that are performed effortlessly by humans. Attempts have been made to emulate the computations in different parts of the primate cortex to gain a better understanding of the cortex and to design brain–machine interfaces that speak the same language as the brain. The HMAX model proposed by Riesenhuber and Poggio and extended by Serre <etal/> attempts to truly model the visual cortex. In this paper, we provide a spike-based implementation of the HMAX model, demonstrating its ability to perform biologically-plausible MAX computations as well as classify basic shapes. The spike-based model consists of 2514 neurons and 17<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\thinspace$</tex> </formula>305 synapses (S1 Layer: 576 neurons and 7488 synapses, C1 Layer: 720 neurons and 2880 synapses, S2 Layer: 576 neurons and 1152 synapses, C2 Layer: 640 neurons and 5760 synapses, and Classifier: 2 neurons and 25 synapses). Without the limits of the retina model, it will take the system 2 min to recognize rectangles and triangles in 24<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>24 pixel images. This can be reduced to 4.8 s by rearranging the lookup table so that neurons which have similar responses to the same input(s) can be placed on the same row and affected in parallel.",
"title": ""
},
{
"docid": "60556a58af0196cc0032d7237636ec52",
"text": "This paper investigates what students understand about algorithm efficiency before receiving any formal instruction on the topic. We gave students a challenging search problem and two solutions, then asked them to identify the more efficient solution and to justify their choice. Many students did not use the standard worst-case analysis of algorithms; rather they chose other metrics, including average-case, better for more cases, better in all cases, one algorithm being more correct, and better for real-world scenarios. Students were much more likely to choose the correct algorithm when they were asked to trace the algorithms on specific examples; this was true even if they traced the algorithms incorrectly.",
"title": ""
},
{
"docid": "6fc290610e99d66248c6d9e8c4fa4f02",
"text": "Ali, M. A. 2014. Understanding Cancer Mutations by Genome Editing. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Medicine 1054. 37 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9106-2. Mutational analyses of cancer genomes have identified novel candidate cancer genes with hitherto unknown function in cancer. To enable phenotyping of mutations in such genes, we have developed a scalable technology for gene knock-in and knock-out in human somatic cells based on recombination-mediated construct generation and a computational tool to design gene targeting constructs. Using this technology, we have generated somatic cell knock-outs of the putative cancer genes ZBED6 and DIP2C in human colorectal cancer cells. In ZBED6 cells complete loss of functional ZBED6 was validated and loss of ZBED6 induced the expression of IGF2. Whole transcriptome and ChIP-seq analyses revealed relative enrichment of ZBED6 binding sites at upregulated genes as compared to downregulated genes. The functional annotation of differentially expressed genes revealed enrichment of genes related to cell cycle and cell proliferation and the transcriptional modulator ZBED6 affected the cell growth and cell cycle of human colorectal cancer cells. In DIP2Ccells, transcriptome sequencing revealed 780 differentially expressed genes as compared to their parental cells including the tumour suppressor gene CDKN2A. The DIP2C regulated genes belonged to several cancer related processes such as angiogenesis, cell structure and motility. The DIP2Ccells were enlarged and grew slower than their parental cells. To be able to directly compare the phenotypes of mutant KRAS and BRAF in colorectal cancers, we have introduced a KRAS allele in RKO BRAF cells. The expression of the mutant KRAS allele was confirmed and anchorage independent growth was restored in KRAS cells. The differentially expressed genes both in BRAF and KRAS mutant cells included ERBB, TGFB and histone modification pathways. Together, the isogenic model systems presented here can provide insights to known and novel cancer pathways and can be used for drug discovery.",
"title": ""
},
{
"docid": "26787002ed12cc73a3920f2851449c5e",
"text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.",
"title": ""
},
{
"docid": "5bd2a042a1309792da03577d3eaf24dc",
"text": "Movement primitives are a well established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected, mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.",
"title": ""
},
{
"docid": "a9c00556e3531ba81cc009ae3f5a1816",
"text": "A systematic, tiered approach to assess the safety of engineered nanomaterials (ENMs) in foods is presented. The ENM is first compared to its non-nano form counterpart to determine if ENM-specific assessment is required. Of highest concern from a toxicological perspective are ENMs which have potential for systemic translocation, are insoluble or only partially soluble over time or are particulate and bio-persistent. Where ENM-specific assessment is triggered, Tier 1 screening considers the potential for translocation across biological barriers, cytotoxicity, generation of reactive oxygen species, inflammatory response, genotoxicity and general toxicity. In silico and in vitro studies, together with a sub-acute repeat-dose rodent study, could be considered for this phase. Tier 2 hazard characterisation is based on a sentinel 90-day rodent study with an extended range of endpoints, additional parameters being investigated case-by-case. Physicochemical characterisation should be performed in a range of food and biological matrices. A default assumption of 100% bioavailability of the ENM provides a 'worst case' exposure scenario, which could be refined as additional data become available. The safety testing strategy is considered applicable to variations in ENM size within the nanoscale and to new generations of ENM.",
"title": ""
},
{
"docid": "0965f1390233e71da72fbc8f37394add",
"text": "Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.",
"title": ""
},
{
"docid": "e6088779901bd4bfaf37a3a1784c3854",
"text": "There has been recently a great progress in the field of automatically generated knowledge bases and corresponding disambiguation systems that are capable of mapping text mentions onto canonical entities. Efforts like the before mentioned have enabled researchers and analysts from various disciplines to semantically “understand” contents. However, most of the approaches have been specifically designed for the English language and in particular support for Arabic is still in its infancy. Since the amount of Arabic Web contents (e.g. in social media) has been increasing dramatically over the last years, we see a great potential for endeavors that support an entity-level analytics of these data. To this end, we have developed a framework called AIDArabic that extends the existing AIDA system by additional components that allow the disambiguation of Arabic texts based on an automatically generated knowledge base distilled from Wikipedia. Even further, we overcome the still existing sparsity of the Arabic Wikipedia by exploiting the interwiki links between Arabic and English contents in Wikipedia, thus, enriching the entity catalog as well as disambiguation context.",
"title": ""
},
{
"docid": "80ae11d4c626c564023ab70b64bde846",
"text": "This paper presents the results of the study carried out for the determination of the residential, commercial and industrial consumers daily load curves based on field measurements performed by the Utilities of Electric Energy of São Paulo State, Brazil. A methodology for the aggregation of these loads to determine the expected loading in equipment or in a preset part of the distribution network by using the representative daily curves of each consumer’s activity and the monthly energy consumption of the connected consumers is also presented.",
"title": ""
},
{
"docid": "fcbb5b1adf14b443ef0d4a6f939140fe",
"text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.",
"title": ""
},
{
"docid": "cec6e899c23dd65881f84cca81205eb0",
"text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.",
"title": ""
},
{
"docid": "c84a0f630b4fb2e547451d904e1c63a5",
"text": "Deep neural network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on “informative” examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the persample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally, on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.",
"title": ""
},
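The entry above describes sampling training examples in proportion to (bounds on) their gradient norms. Purely as an illustration of that core idea, and not code from the cited paper, the NumPy sketch below draws a mini-batch with probabilities proportional to per-sample importance scores and reweights by 1/(N·p_i) so the gradient estimate stays unbiased; the score distribution shown is an assumed stand-in.

```python
import numpy as np

def importance_sampled_batch(scores, batch_size, rng):
    # Sampling probabilities proportional to the per-sample importance scores
    # (e.g., an upper bound on each sample's gradient norm).
    p = scores / scores.sum()
    idx = rng.choice(len(scores), size=batch_size, replace=True, p=p)
    # 1/(N * p_i) weights keep the resulting mini-batch gradient estimate unbiased.
    weights = 1.0 / (len(scores) * p[idx])
    return idx, weights

# Hypothetical usage: a heavily skewed score distribution, as often seen late in training.
rng = np.random.default_rng(0)
scores = rng.random(10_000) ** 4
idx, w = importance_sampled_batch(scores, batch_size=128, rng=rng)
print(idx.shape, round(float(w.mean()), 3))
```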
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "a458f16b84f40dc0906658a93d4b2efa",
"text": "We investigated the usefulness of Sonazoid contrast-enhanced ultrasonography (Sonazoid-CEUS) in the diagnosis of hepatocellular carcinoma (HCC). The examination was performed by comparing the images during the Kupffer phase of Sonazoid-CEUS with superparamagnetic iron oxide magnetic resonance (SPIO-MRI). The subjects were 48 HCC nodules which were histologically diagnosed (well-differentiated HCC, n = 13; moderately differentiated HCC, n = 30; poorly differentiated HCC, n = 5). We performed Sonazoid-CEUS and SPIO-MRI on all subjects. In the Kupffer phase of Sonazoid-CEUS, the differences in the contrast agent uptake between the tumorous and non-tumorous areas were quantified as the Kupffer phase ratio and compared. In the SPIO-MRI, it was quantified as the SPIO-intensity index. We then compared these results with the histological differentiation of HCCs. The Kupffer phase ratio decreased as the HCCs became less differentiated (P < 0.0001; Kruskal–Wallis test). The SPIO-intensity index also decreased as HCCs became less differentiated (P < 0.0001). A positive correlation was found between the Kupffer phase ratio and the SPIO-MRI index (r = 0.839). In the Kupffer phase of Sonazoid-CEUS, all of the moderately and poorly differentiated HCCs appeared hypoechoic and were detected as a perfusion defect, whereas the majority (9 of 13 cases, 69.2%) of the well-differentiated HCCs had an isoechoic pattern. The Kupffer phase images of Sonazoid-CEUS and SPIO-MRI matched perfectly (100%) in all of the moderately and poorly differentiated HCCs. Sonazoid-CEUS is useful for estimating histological grading of HCCs. It is a modality that could potentially replace SPIO-MRI.",
"title": ""
}
] | scidocsrr |
1418ec82ce97fa32e4b51cf663172f69 | Image denoising via adaptive soft-thresholding based on non-local samples | [
{
"docid": "c6a44d2313c72e785ae749f667d5453c",
"text": "Donoho and Johnstone (1992a) proposed a method for reconstructing an unknown function f on [0; 1] from noisy data di = f(ti) + zi, i = 0; : : : ; n 1, ti = i=n, zi iid N(0; 1). The reconstruction f̂ n is de ned in the wavelet domain by translating all the empirical wavelet coe cients of d towards 0 by an amount p 2 log(n) = p n. We prove two results about that estimator. [Smooth]: With high probability f̂ n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model.",
"title": ""
},
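Purely as an illustration of the soft-thresholding rule referenced in the entry above (not code from that paper), here is a minimal NumPy sketch; the universal threshold √(2 log n) for unit noise level is an assumption taken from the standard formulation of this estimator.

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Translate every coefficient towards zero by the amount t, clipping at zero.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Hypothetical example: a sparse coefficient vector observed in unit-variance Gaussian noise.
rng = np.random.default_rng(0)
n = 1024
clean = np.zeros(n)
clean[:16] = 5.0                      # a few large "signal" coefficients
noisy = clean + rng.standard_normal(n)
t = np.sqrt(2.0 * np.log(n))          # universal threshold for noise level 1
denoised = soft_threshold(noisy, t)
print(np.count_nonzero(denoised))     # most pure-noise coefficients are zeroed out
```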
{
"docid": "db913c6fe42f29496e13aa05a6489c9b",
"text": "As a convex relaxation of the low rank matrix factorization problem, the nuclear norm minimization has been attracting significant research interest in recent years. The standard nuclear norm minimization regularizes each singular value equally to pursue the convexity of the objective function. However, this greatly restricts its capability and flexibility in dealing with many practical problems (e.g., denoising), where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, where the singular values are assigned different weights. The solutions of the WNNM problem are analyzed under different weighting conditions. We then apply the proposed WNNM algorithm to image denoising by exploiting the image nonlocal self-similarity. Experimental results clearly show that the proposed WNNM algorithm outperforms many state-of-the-art denoising algorithms such as BM3D in terms of both quantitative measure and visual perception quality.",
"title": ""
},
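As an illustration of the weighted singular value shrinkage at the heart of the entry above (a sketch under assumptions, not the authors' implementation), the snippet below soft-thresholds each singular value by its own weight; the inverse-magnitude weighting is an assumed scheme for demonstration, not the exact one used in the paper.

```python
import numpy as np

def weighted_svt(Y, weights):
    # Soft-threshold each singular value of Y by its own weight
    # (weighted singular value thresholding).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

# Hypothetical usage: larger singular values (strong patch structure) get smaller
# weights and are therefore shrunk less than the small, noise-dominated ones.
rng = np.random.default_rng(1)
Y = rng.standard_normal((64, 32))
s = np.linalg.svd(Y, compute_uv=False)
weights = 10.0 / (s + 1e-8)           # an assumed inverse-magnitude weighting
X_hat = weighted_svt(Y, weights)
print(X_hat.shape, int(np.count_nonzero(np.maximum(s - weights, 0.0))))
```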
{
"docid": "4d9cf5a29ebb1249772ebb6a393c5a4e",
"text": "This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
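The following is a small, deliberately slow pure-NumPy sketch of the non-local averaging idea described in the entry above, included only as an illustration; the patch size, search window, and filtering parameter h are assumed values, and practical NL-means implementations are heavily optimized.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.2):
    # Each output pixel is a weighted average over a search window; weights decay
    # with the (normalized) squared distance between the surrounding patches.
    pad, s = patch // 2, search // 2
    padded = np.pad(img, pad + s, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + s, j + pad + s
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    acc += w * padded[ni, nj]
                    wsum += w
            out[i, j] = acc / wsum
    return out

# Hypothetical usage on a tiny synthetic image with Gaussian noise of standard deviation 0.1.
rng = np.random.default_rng(2)
clean = np.zeros((16, 16)); clean[4:12, 4:12] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(float(np.abs(noisy - clean).mean()), float(np.abs(nl_means(noisy) - clean).mean()))
```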
] | [
{
"docid": "fc04f9bd523e3d2ca57ab3a8e730397b",
"text": "Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.",
"title": ""
},
{
"docid": "6b933bbad26efaf65724d0c923330e75",
"text": "This paper presents a 138-170 GHz active frequency doubler implemented in a 0.13 μm SiGe BiCMOS technology with a peak output power of 5.6 dBm and peak power-added efficiency of 7.6%. The doubler achieves a peak conversion gain of 4.9 dB and consumes only 36 mW of DC power at peak drive through the use of a push-push frequency doubling stage optimized for low drive power, along with a low-power output buffer. To the best of our knowledge, this doubler achieves the highest output power, efficiency, and fundamental frequency suppression of all D-band and G-band SiGe HBT frequency doublers to date.",
"title": ""
},
{
"docid": "deaa86a5fe696d887140e29d0b2ae22c",
"text": "The high prevalence of spinal stenosis results in a large volume of MRI imaging, yet interpretation can be time-consuming with high inter-reader variability even among the most specialized radiologists. In this paper, we develop an efficient methodology to leverage the subject-matter-expertise stored in large-scale archival reporting and image data for a deep-learning approach to fully-automated lumbar spinal stenosis grading. Specifically, we introduce three major contributions: (1) a natural-language-processing scheme to extract level-by-level ground-truth labels from free-text radiology reports for the various types and grades of spinal stenosis (2) accurate vertebral segmentation and disc-level localization using a U-Net architecture combined with a spine-curve fitting method, and (3) a multiinput, multi-task, and multi-class convolutional neural network to perform central canal and foraminal stenosis grading on both axial and sagittal imaging series inputs with the extracted report-derived labels applied to corresponding imaging level segments. This study uses a large dataset of 22796 disc-levels extracted from 4075 patients. We achieve state-ofthe-art performance on lumbar spinal stenosis classification and expect the technique will increase both radiology workflow efficiency and the perceived value of radiology reports for referring clinicians and patients.",
"title": ""
},
{
"docid": "af0b4e07ec7a60d0021e8bddde5e8b92",
"text": "Social Network Sites (SNSs) offer a plethora of privacy controls, but users rarely exploit all of these mechanisms, nor do they do so in the same manner. We demonstrate that SNS users instead adhere to one of a small set of distinct privacy management strategies that are partially related to their level of privacy feature awareness. Using advanced Factor Analysis methods on the self-reported privacy behaviors and feature awareness of 308 Facebook users, we extrapolate six distinct privacy management strategies, including: Privacy Maximizers, Selective Sharers, Privacy Balancers, Self-Censors, Time Savers/Consumers, and Privacy Minimalists and six classes of privacy proficiency based on feature awareness, ranging from Novices to Experts. We then cluster users on these dimensions to form six distinct behavioral profiles of privacy management strategies and six awareness profiles for privacy proficiency. We further analyze these privacy profiles to suggest opportunities for training and education, interface redesign, and new approaches for personalized privacy recommendations.",
"title": ""
},
{
"docid": "0fefdbc0dbe68391ccfc912be937f4fc",
"text": "Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes.",
"title": ""
},
{
"docid": "5bd9b0de217f2a537a5fadf99931d149",
"text": "A linear programming (LP) method for security dispatch and emergency control calculations on large power systems is presented. The method is reliable, fast, flexible, easy to program, and requires little computer storage. It works directly with the normal power-system variables and limits, and incorporates the usual sparse matrix techniques. An important feature of the method is that it handles multi-segment generator cost curves neatly and efficiently.",
"title": ""
},
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
{
"docid": "aac5f1bd2459a19c42bb0c48e99e22f0",
"text": "This study examined multiple levels of adolescents' interpersonal functioning, including general peer relations (peer crowd affiliations, peer victimization), and qualities of best friendships and romantic relationships as predictors of symptoms of depression and social anxiety. An ethnically diverse sample of 421 adolescents (57% girls; 14 to 19 years) completed measures of peer crowd affiliation, peer victimization, and qualities of best friendships and romantic relationships. Peer crowd affiliations (high and low status), positive qualities in best friendships, and the presence of a dating relationship protected adolescents against feelings of social anxiety, whereas relational victimization and negative interactions in best friendships predicted high social anxiety. In contrast, affiliation with a high-status peer crowd afforded some protection against depressive affect; however, relational victimization and negative qualities of best friendships and romantic relationships predicted depressive symptoms. Some moderating effects for ethnicity were observed. Findings indicate that multiple aspects of adolescents' social relations uniquely contribute to feelings of internal distress. Implications for research and preventive interventions are discussed.",
"title": ""
},
{
"docid": "0cd46ebc56a6f640931ac4a81676968f",
"text": "An improved direct torque controlled induction motor drive is reported in this paper. It is established that the conventional direct torque controlled drive has more torque and flux ripples in steady state, which result in poor torque response, acoustic noise and incorrect speed estimations. Hysteresis controllers also make the switching frequency of voltage source inverter a variable quantity. A strategy of variable duty ratio control scheme is proposed to increase switching frequency, and adjust the width of hysteresis bands according to the switching frequency. This technique minimizes torque and current ripples, improves torque response, and reduces switching losses in spite of its simplicity. Simulation results establish the improved performance of the proposed direct torque control method compared to conventional methods.",
"title": ""
},
{
"docid": "3177e9dd683fdc66cbca3bd985f694b1",
"text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].",
"title": ""
},
{
"docid": "18216c0745ae3433b3b7f89bb7876a49",
"text": "This paper presents research using full body skeletal movements captured using video-based sensor technology developed by Vicon Motion Systems, to train a machine to identify different human emotions. The Vicon system uses a series of 6 cameras to capture lightweight markers placed on various points of the body in 3D space, and digitizes movement into x, y, and z displacement data. Gestural data from five subjects was collected depicting four emotions: sadness, joy, anger, and fear. Experimental results with different machine learning techniques show that automatic classification of this data ranges from 84% to 92% depending on how it is calculated. In order to put these automatic classification results into perspective a user study on the human perception of the same data was conducted with average classification accuracy of 93%.",
"title": ""
},
{
"docid": "695264db0ca1251ab0f63b04d41c68cd",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "89267dbf693643ea53696c7d545254ea",
"text": "Cognitive dissonance theory is applicable to very limited areas of consumer behavior according to the author. Published findings in support of the theory are equivocal; they fail to show that cognitive dissonance is the only possible cause of observed \"dissonance-reducing\" behavior. Experimental evidences are examined and their weaknesses pointed out by the author to justify his position. He also provides suggestions regarding the circumstances under which dissonance reduction may be useful in increasing the repurchase probability of a purchased brand.",
"title": ""
},
{
"docid": "c68633905f8bbb759c71388819e9bfa9",
"text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.",
"title": ""
},
{
"docid": "70ba0f4938630e07d9b145216a01177a",
"text": "For some decades radiation therapy has been proved successful in cancer treatment. It is the major task of clinical radiation treatment planning to realise on the one hand a high level dose of radiation in the cancer tissue in order to obtain maximum tumour control. On the other hand it is obvious that it is absolutely necessary to keep in the tissue outside the tumour, particularly in organs at risk, the unavoidable radiation as low as possible. No doubt, these two objectives of treatment planning – high level dose in the tumour, low radiation outside the tumour – have a basically contradictory nature. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases. Thus, there is need for approximations compromising between overdosing the organs at risk and underdosing the target volume. Differing from the currently used time consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan using recursively trial-and-error weights for the organs of interest, we go a new way trying to avoid a priori weight choices and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a data base of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This data base can be easily scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool. 1 The inverse radiation treatment problem – an introduction Every year, in Germany about 450.000 individuals are diagnosed with life-threatening forms of cancer. About 60% of these patients are treated with radiation; half of them are considered curable because their tumours are localised and susceptible to radiation. Nevertheless, despite the use of the best radiation therapy methods available, one third of these “curable” patients – nearly 40.000 people each year – die with primary tumours still active at the original site. Why does this occur ? Experts in the field have looked at the reasons for these failures and have concluded that radiation therapy planning – in particular in complicated anatomical situations – is often inadequate, providing either too little radiation to the tumour or too much radiation to nearby healthy tissue. Effective radiation therapy planning for treating malignent tumours is always a tightrope walk between ineffective underdose of tumour tissue – the target volume – and dangerous overdose of organs at risk being relevant for maintaining life quality of the cured patient. Therefore, it is the challenging task of a radiation therapy planner to realise a certain high dose level conform to the shape of the target volume in order to have a good prognosis for tumour control and to avoid overdose in relevant healthy tissue nearby. Part of this challenge is the computer aided representation of the relevant parts of the body. Modern scanning methods like computer tomography (CT), magnetic resonance tomography 1 on sabbatical leave at the Department of Engineering Science, University of Auckland, New Zealand",
"title": ""
},
{
"docid": "f5b02bdd74772ff2454a475e44077c8e",
"text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts. Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.",
"title": ""
},
{
"docid": "5a2c04519e5e810daed299140a0c398c",
"text": "Satisfying stringent customer requirement of visually detectable solder joint termination for high reliability applications requires the implementation of robust wettable flank strategies. One strategy involves the exposition of the sidewall via partial-cut singulation, where the exposed surface could be made wettable through tin (Sn) electroplating process. Herein, we report our systematic approach in evaluating the viability of mechanical partial-cut singulation process to produce Sn-plateable sidewalls, enabling the wettable flank technology using an automotive QFN package are technology carrier. Optimization DOE produced robust set of parameters showing that mechanical partial cut is a promising solution to produce sidewalls appropriate for Sn electroplating, synergistically yielding excellent wettable flanks.",
"title": ""
}
] | scidocsrr |
ea0415d9f5220aa71bd4a8705e11de49 | A MapReduce solution for associative classification of big data | [
{
"docid": "44ea81d223e3c60c7b4fd1192ca3c4ba",
"text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes",
"title": ""
},
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
},
{
"docid": "5abe5696969eca4d19a55e3492af03a8",
"text": "In the era of big data, analyzing and extracting knowledge from large-scale data sets is a very interesting and challenging task. The application of standard data mining tools in such data sets is not straightforward. Hence, a new class of scalable mining method that embraces the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These methods aim at representing original training data sets as a reduced number of instances. Their main purposes are to speed up the classification process and reduce the storage requirements and sensitivity to noise of the nearest neighbor rule. However, the standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework to distribute the functioning of these algorithms through a cluster of computing elements, proposing several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied over big data classification problems without significant accuracy loss. We test the speeding up capabilities of our model with data sets up to 5.7 millions of Email addresses: triguero@decsai.ugr.es (Isaac Triguero), dperalta@decsai.ugr.es (Daniel Peralta), jaume.bacardit@newcastle.ac.uk (Jaume Bacardit), sglopez@ujaen.es (Salvador Garćıa), herrera@decsai.ugr.es (Francisco Herrera) Preprint submitted to Neurocomputing March 3, 2014 instances. The results show that this model is a suitable tool to enhance the performance of the nearest neighbor classifier with big data.",
"title": ""
}
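To make the map/reduce split described in the entry above concrete, here is a toy single-process sketch, offered only as an illustration and not as the authors' framework: the training set is partitioned, a prototype-reduction routine runs on each partition independently, and the partial reduced sets are joined. The chunk-level reducer shown is a trivial stand-in for a real prototype selection or generation method.

```python
import random
from collections import defaultdict

def map_reduce_prototypes(dataset, n_partitions, reduce_chunk):
    # "Map": run the reduction routine on each partition independently.
    random.shuffle(dataset)
    partitions = [dataset[i::n_partitions] for i in range(n_partitions)]
    partial_sets = [reduce_chunk(chunk) for chunk in partitions]
    # Simplest "reduce": concatenate the partial reduced sets into one prototype set.
    return [p for partial in partial_sets for p in partial]

def one_per_class(chunk):
    # Trivial stand-in reducer: keep a single prototype per class per partition.
    by_class = defaultdict(list)
    for x, y in chunk:
        by_class[y].append((x, y))
    return [examples[0] for examples in by_class.values()]

# Hypothetical usage with a small synthetic two-class training set.
random.seed(0)
data = [((random.random(), random.random()), label) for label in (0, 1) for _ in range(500)]
prototypes = map_reduce_prototypes(data, n_partitions=4, reduce_chunk=one_per_class)
print(len(prototypes), "prototypes kept out of", len(data), "training instances")
```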
] | [
{
"docid": "9d37fa004b92180faccf7d8e22452919",
"text": "Modern AI and robotic systems are characterized by a high and ever-increasing level of autonomy. At the same time, their applications in fields such as autonomous driving, service robotics and digital personal assistants move closer to humans. From the combination of both developments emerges the field of AI ethics which recognizes that the actions of autonomous machines entail moral dimensions and tries to answer the question of how we can build moral machines. In this paper we argue for taking inspiration from Aristotelian virtue ethics by showing that it forms a suitable combination with modern AI due to its focus on learning from experience. We furthermore propose that imitation learning from moral exemplars, a central concept in virtue ethics, can solve the value alignment problem. Finally, we show that an intelligent system endowed with the virtues of temperance and friendship to humans would not pose a control problem as it would not have the desire for limitless",
"title": ""
},
{
"docid": "e50b074abe37cc8caec8e3922347e0d9",
"text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.",
"title": ""
},
{
"docid": "80224e331eacb3d3e0ec3a35a0582341",
"text": "This paper describes a frequency-tunable phase inverter based on a slot-line resonator for the first time. The control circuit is designed and located on the defected ground. None of dc block capacitors are needed in the microstrip line. A wide tuning frequency range is accomplished by the use of the slot-line resonator with two varactors and a single control voltage. A 180-degree phase inverter is achieved by means of reversing electric field with two metallic via holes connecting the microstrip and ground plane. The graphic method is used to estimate the operation frequency. For verification, a frequency-tunable phase inverter is fabricated and measured. The measured results show a wide tuning frequency range from 1.1 GHz to 1.75 GHz with better than 20-dB return loss. The measured results are in good agreement with the simulated ones.",
"title": ""
},
{
"docid": "d698f181eb7682d9bf98b3bc103abaac",
"text": "Current database research identified the use of computational power of GPUs as a way to increase the performance of database systems. As GPU algorithms are not necessarily faster than their CPU counterparts, it is important to use the GPU only if it will be beneficial for query processing. In a general database context, only few research projects address hybrid query processing, i.e., using a mix of CPUand GPU-based processing to achieve optimal performance. In this paper, we extend our CPU/GPU scheduling framework to support hybrid query processing in database systems. We point out fundamental problems and propose an algorithm to create a hybrid query plan for a query using our scheduling framework. Additionally, we provide cost metrics, which consider the possible overlapping of data transfers and computation on the GPU. Furthermore, we present algorithms to create hybrid query plans for query sequences and query trees.",
"title": ""
},
{
"docid": "6ac996c20f036308f36c7b667babe876",
"text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.",
"title": ""
},
{
"docid": "58bfe45d6f2e8bdb2f641290ee6f0b86",
"text": "Intimate partner violence (IPV) is a common phenomenon worldwide. However, there is a relative dearth of qualitative research exploring IPV in which men are the victims of their female partners. The present study used a qualitative approach to explore how Portuguese men experience IPV. Ten male victims (aged 35–75) who had sought help from domestic violence agencies or from the police were interviewed. Transcripts were analyzed using QSR NVivo10 and coded following thematic analysis. The results enhance our understanding of both the nature and dynamics of the violence that men experience as well as the negative impact of violence on their lives. This study revealed the difficulties that men face in the process of seeking help, namely differences in treatment of men versus women victims. It also highlights that help seeking had a negative emotional impact for most of these men. Finally, this study has important implications for practitioners and underlines macro-level social recommendations for raising awareness about this phenomenon, including the need for changes in victims’ services and advocacy for gender-inclusive campaigns and responses.",
"title": ""
},
{
"docid": "426d3b0b74eacf4da771292abad06739",
"text": "Brain tumor is considered as one of the deadliest and most common form of cancer both in children and in adults. Consequently, determining the correct type of brain tumor in early stages is of significant importance to devise a precise treatment plan and predict patient's response to the adopted treatment. In this regard, there has been a recent surge of interest in designing Convolutional Neural Networks (CNNs) for the problem of brain tumor type classification. However, CNNs typically require large amount of training data and can not properly handle input transformations. Capsule networks (referred to as CapsNets) are brand new machine learning architectures proposed very recently to overcome these shortcomings of CNNs, and posed to revolutionize deep learning solutions. Of particular interest to this work is that Capsule networks are robust to rotation and affine transformation, and require far less training data, which is the case for processing medical image datasets including brain Magnetic Resonance Imaging (MRI) images. In this paper, we focus to achieve the following four objectives: (i) Adopt and incorporate CapsNets for the problem of brain tumor classification to design an improved architecture which maximizes the accuracy of the classification problem at hand; (ii) Investigate the over-fitting problem of CapsNets based on a real set of MRI images; (iii) Explore whether or not CapsNets are capable of providing better fit for the whole brain images or just the segmented tumor, and; (iv) Develop a visualization paradigm for the output of the CapsNet to better explain the learned features. Our results show that the proposed approach can successfully overcome CNNs for the brain tumor classification problem.",
"title": ""
},
{
"docid": "3edf5d1cce2a26fbf5c2cc773649629b",
"text": "We conducted three experiments to investigate the mental images associated with idiomatic phrases in English. Our hypothesis was that people should have strong conventional images for many idioms and that the regularity in people's knowledge of their images for idioms is due to the conceptual metaphors motivating the figurative meanings of idioms. In the first study, subjects were asked to form and describe their mental images for different idiomatic expressions. Subjects were then asked a series of detailed questions about their images regarding the causes and effects of different events within their images. We found high consistency in subjects' images of idioms with similar figurative meanings despite differences in their surface forms (e.g., spill the beans and let the cat out of the bag). Subjects' responses to detailed questions about their images also showed a high degree of similarity in their answers. Further examination of subjects' imagery protocols supports the idea that the conventional images and knowledge associated with idioms are constrained by the conceptual metaphors (e.g., the MIND IS A CONTAINER and IDEAS ARE ENTITIES) which motivate the figurative meanings of idioms. The results of two control studies showed that the conventional images associated with idioms are not solely based on their figurative meanings (Experiment 2) and that the images associated with literal phrases (e.g., spill the peas) were quite varied and unlikely to be constrained by conceptual metaphor (Experiment 3). These findings support the view that idioms are not \"dead\" metaphors with their meanings being arbitrarily determined. Rather, the meanings of many idioms are motivated by speakers' tacit knowledge of the conceptual metaphors underlying the meanings of these figurative phrases.",
"title": ""
},
{
"docid": "2575bad473ef55281db460617e0a37c8",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
{
"docid": "102f3b3d1b529c4d6f53a690f5940a25",
"text": "In this document, we first provide mathematical details regarding our optimization of the visible and hidden layers. In particular, we discuss the derivation of K−1 in Section 1.3 and provide the proximal operators for the hidden layer in Section 2. We further give the detailed derivation of the proximal operators for the hidden layer as an example in Section 2.1. A similar derivation can be used for the other proximal operators. We finally provide some additional details about our experiments.",
"title": ""
},
{
"docid": "7eeb2bf2aaca786299ebc8507482e109",
"text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.",
"title": ""
},
{
"docid": "c0c30c3b9539511e9079ec7894ad754f",
"text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "7a86db2874d602e768d0641bb18ae0c3",
"text": "Most work in reinforcement learning (RL) is based on discounted techniques, such as Q learning, where long-term rewards are geometrically attenuated based on the delay in their occurence. Schwartz recently proposed an undiscounted RL technique called R learning that optimizes average reward, and argued that it was a better metric than the discounted one optimized by Q learning. In this paper we compare R learning with Q learning on a simulated robot box-pushing task. We compare these two techniques across three diierent exploration strategies: two of them undirected, Boltz-mann and semi-uniform, and one recency-based directed strategy. Our results show that Q learning performs better than R learning , even when both are evaluated using the same undiscounted performance measure. Furthermore, R learning appears to be very sensitive to choice of exploration strategy. In particular, a surprising result is that R learn-ing's performance noticeably deteriorates under Boltzmann exploration. We identify precisely a limit cycle situation that causes R learning's performance to deteriorate when combined with Boltzmann exploration, and show where such limit cycles arise in our robot task. However, R learning performs much better (although not as well as Q learning) when combined with semi-uniform and recency-based exploration. In this paper, we also argue for using medians over means as a better distribution-free estimator of average performance, and describe a simple non-parametric signiicance test for comparing learning data from two RL techniques.",
"title": ""
},
{
"docid": "f06a01dd29730e91e15d8c2e1d3b084a",
"text": "Recent years have seen the development of a multitude of tools for the security analysis of Android applications. A major deficit of current fully automated security analyses, however, is their inability to drive execution to interesting parts, such as where code is dynamically loaded or certain data is decrypted. In fact, security-critical or downright offensive code may not be reached at all by such analyses when dynamically checked conditions are not met by the analysis environment. To tackle this unsolved problem, we propose a tool combining static call path analysis with byte code instrumentation and a heuristic partial symbolic execution, which aims at executing interesting calls paths. It can systematically locate potentially security-critical code sections and instrument applications such that execution of these sections can be observed in a dynamic analysis. Among other use cases, this can be leveraged to force applications into revealing dynamically loaded code, a simple yet effective way to circumvent detection by security analysis software such as the Google Play Store's Bouncer. We illustrate the functionality of our tool by means of a simple logic bomb example and a real-life security vulnerability which is present in hunderd of apps and can still be actively exploited at this time.",
"title": ""
},
{
"docid": "eae9c6f8a3d50cb6e4b13afebdffb3ef",
"text": "The duration that exercise can be maintained decreases as the power requirements increase. In this review, we describe the power-duration (PD) relationship across the full range of attainable power outputs in humans. We show that a remarkably small range of power outputs is sustainable (power outputs below the critical power, CP). We also show that the origin of neuromuscular fatigue differs considerably depending on the exercise intensity domain in which exercise is performed. In the moderate domain (below the lactate threshold, LT), fatigue develops slowly and is predominantly of central origin (residing in the central nervous system). In the heavy domain (above LT but below CP), both central and peripheral (muscle) fatigue are observed. In this domain, fatigue is frequently correlated with the depletion of muscle glycogen. Severe-intensity exercise (above the CP) is associated with progressive derangements of muscle metabolic homeostasis and consequent peripheral fatigue. To counter these effects, muscle activity increases progressively, as does pulmonary oxygen uptake ([Formula: see text]), with task failure being associated with the attainment of [Formula: see text] max. Although the loss of homeostasis and thus fatigue develop more rapidly the higher the power output is above CP, the metabolic disturbance and the degree of peripheral fatigue reach similar values at task failure. We provide evidence that the failure to continue severe-intensity exercise is a physiological phenomenon involving multiple interacting mechanisms which indicate a mismatch between neuromuscular power demand and instantaneous power supply. Valid integrative models of fatigue must account for the PD relationship and its physiological basis.",
"title": ""
},
{
"docid": "66ad4513ed36329c299792ce35b2b299",
"text": "Reducing social uncertainty—understanding, predicting, and controlling the behavior of other people—is a central motivating force of human behavior. When rules and customs are not su4cient, people rely on trust and familiarity as primary mechanisms to reduce social uncertainty. The relative paucity of regulations and customs on the Internet makes consumer familiarity and trust especially important in the case of e-Commerce. Yet the lack of an interpersonal exchange and the one-time nature of the typical business transaction on the Internet make this kind of consumer trust unique, because trust relates to other people and is nourished through interactions with them. This study validates a four-dimensional scale of trust in the context of e-Products and revalidates it in the context of e-Services. The study then shows the in:uence of social presence on these dimensions of this trust, especially benevolence, and its ultimate contribution to online purchase intentions. ? 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b8d63090ea7d3302c71879ea4d11fde5",
"text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.",
"title": ""
},
{
"docid": "5fd35932b564437a7e29ca5a233828b8",
"text": "Pain is an unpleasant sensation associated with a wide range of injuries and diseases, and affects approximately 20% of adults in the world. The discovery of new and more effective drugs that can relieve pain is an important research goal in both the pharmaceutical industry and academia. This review describes studies involving antinociceptive activity of essential oils from 31 plant species. Botanical aspects of aromatic plants, mechanisms of action in pain models and chemical composition profiles of the essential oils are discussed. The data obtained in these studies demonstrate the analgesic potential of this group of natural products for therapeutic purposes.",
"title": ""
}
] | scidocsrr |
2dd593d54057504f3af12def3133b838 | The Effects of Interleaved Practice | [
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
}
] | [
{
"docid": "5d44349955d07a212bc11f6edfaec8b0",
"text": "This investigation develops an innovative algorithm for multiple autonomous unmanned aerial vehicle (UAV) mission routing. The concept of a UAV Swarm Routing Problem (SRP) as a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of SRP problem model result in route assignments per vehicle that successfully track to all targets, on time, within distance constraints. A complexity analysis and multi-objective formulation of the VRPTW indicates the necessity of a stochastic solution approach leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP as well as a multi-objective formulation parallels that of the VRPTW method. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The solutions show the SRP solutions are comparable or better than the same VRPTW solutions, while also representing a more realistic UAV swarm routing solution.",
"title": ""
},
{
"docid": "04edf5059bcaf3ed361ed65b8897ba8d",
"text": "The flying-capacitor (FC) topology is one of the more well-established ideas of multilevel conversion, typically applied as an inverter. One of the biggest advantages of the FC converter is the ability to naturally balance capacitor voltage. When natural balancing occurs neither measurements, nor additional control is needed to maintain required capacitors voltage sharing. However, in order to achieve natural voltage balancing suitable conditions must be achieved such as the topology, number of levels, modulation strategy as well as impedance of the output circuitry. Nevertheless this method is effectively applied in various classes of the converter such as inverters, multicell DC-DC, switch-mode DC-DC, AC-AC, as well as rectifiers. The next important issue related to the natural balancing process is its dynamics. Furthermore, in order to reinforce the balancing mechanism an auxiliary resonant balancing circuit is utilized in the converter which can also be critical in the AC-AC converters or switch mode DC-DC converters. This paper also presents an issue of choosing modulation strategy for the FC converter due to the fact that the natural balancing process is well-established for phase shifted PWM whilst other types of modulation can be more favorable for the power quality.",
"title": ""
},
{
"docid": "2ec3f8bc16c6d3dc8022309686c79f8d",
"text": "Manually re-drawing an image in a certain artistic style takes a professional artist a long time. Doing this for a video sequence single-handedly is beyond imagination. We present two computational approaches that transfer the style from one image (for example, a painting) to a whole video sequence. In our first approach, we adapt to videos the original image style transfer technique by Gatys et al. based on energy minimization. We introduce new ways of initialization and new loss functions to generate consistent and stable stylized video sequences even in cases with large motion and strong occlusion. Our second approach formulates video stylization as a learning problem. We propose a deep network architecture and training procedures that allow us to stylize arbitrary-length videos in a consistent and stable way, and nearly in real time. We show that the proposed methods clearly outperform simpler baselines both qualitatively and quantitatively. Finally, we propose a way to adapt these approaches also to 360$$^\\circ $$ ∘ images and videos as they emerge with recent virtual reality hardware.",
"title": ""
},
{
"docid": "c17522f4b9f3b229dae56b394adb69a1",
"text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.",
"title": ""
},
{
"docid": "bc4d717db3b3470d7127590b8d165a5d",
"text": "In this paper, we develop a general formalism for describing the C++ programming language, and regular enough to cope with proposed extensions (such as concepts) for C++0x that affect its type system. Concepts are a mechanism for checking template arguments currently being developed to help cope with the massive use of templates in modern C++. The main challenges in developing a formalism for C++ are scoping, overriding, overloading, templates, specialization, and the C heritage exposed in the built-in types. Here, we primarily focus on templates and overloading.",
"title": ""
},
{
"docid": "962858b6cbb3ae5c95d0018075fd0060",
"text": "By 2010, the worldwide annual production of plastics will surpass 300 million tons. Plastics are indispensable materials in modern society, and many products manufactured from plastics are a boon to public health (e.g., disposable syringes, intravenous bags). However, plastics also pose health risks. Of principal concern are endocrine-disrupting properties, as triggered for example by bisphenol A and di-(2-ethylhexyl) phthalate (DEHP). Opinions on the safety of plastics vary widely, and despite more than five decades of research, scientific consensus on product safety is still elusive. This literature review summarizes information from more than 120 peer-reviewed publications on health effects of plastics and plasticizers in lab animals and humans. It examines problematic exposures of susceptible populations and also briefly summarizes adverse environmental impacts from plastic pollution. Ongoing efforts to steer human society toward resource conservation and sustainable consumption are discussed, including the concept of the 5 Rs--i.e., reduce, reuse, recycle, rethink, restrain--for minimizing pre- and postnatal exposures to potentially harmful components of plastics.",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "912c213d76bed8d90f636ea5a6220cf1",
"text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.",
"title": ""
},
{
"docid": "e9ac1d4fa99e1150a7800471f4f0f73f",
"text": "We present a novel system for automatically generating immersive and interactive virtual reality (VR) environments using the real world as a template. The system captures indoor scenes in 3D, detects obstacles like furniture and walls, and maps walkable areas (WA) to enable real-walking in the generated virtual environment (VE). Depth data is additionally used for recognizing and tracking objects during the VR experience. The detected objects are paired with virtual counterparts to leverage the physicality of the real world for a tactile experience. Our approach is new, in that it allows a casual user to easily create virtual reality worlds in any indoor space of arbitrary size and shape without requiring specialized equipment or training. We demonstrate our approach through a fully working system implemented on the Google Project Tango tablet device.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "1d0baee6485920d98492ed25003fc20e",
"text": "Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the minibatch setting that is often used in practice. Our main contribution is to introduce an accelerated minibatch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].",
"title": ""
},
{
"docid": "9a2e7daf5800cb5ad78646036ee205f0",
"text": "In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from the object features to the effect categories. After learning, the robot can make plans to achieve desired goals, emulate end states of demonstrated actions, monitor the plan execution and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the learning system proposed shares crucial elements with the development of infants of 7-10 months age, who explore the environment and learn the dynamics of the objects through goal-free exploration. In addition, we discuss goal-emulation and planning in relation to older infants with no symbolic inference capability and non-linguistic animals which utilize object affordances to make action plans.",
"title": ""
},
{
"docid": "c53e0a1762e4b69a2b9e5520e3e0bbfe",
"text": "Conventional public key infrastructure (PKI) designs are not optimal and contain security flaws; there is much work underway in improving PKI. The properties given by the Bitcoin blockchain and its derivatives are a natural solution to some of the problems with PKI in particular, certificate transparency and elimination of single points of failure. Recently-proposed blockchain PKI designs are built as public ledgers linking identity with public key, giving no provision of privacy. We consider the suitability of a blockchain-based PKI for contexts in which PKI is required, but in which linking of identity with public key is undesirable; specifically, we show that blockchain can be used to construct a privacy-aware PKI while simultaneously eliminating some of the problems encountered in conventional PKI.",
"title": ""
},
{
"docid": "e0ec89c103aedb1d04fbc5892df288a8",
"text": "This paper compares the computational performances of four model order reduction methods applied to large-scale electric power RLC networks transfer functions with many resonant peaks. Two of these methods require the state-space or descriptor model of the system, while the third requires only its frequency response data. The fourth method is proposed in this paper, being a combination of two of the previous methods. The methods were assessed for their ability to reduce eight test systems, either of the single-input single-output (SISO) or multiple-input multiple-output (MIMO) type. The results indicate that the reduced models obtained, of much smaller dimension, reproduce the dynamic behaviors of the original test systems over an ample range of frequencies with high accuracy.",
"title": ""
},
{
"docid": "869ad7b6bf74f283c8402958a6814a21",
"text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.",
"title": ""
},
{
"docid": "7647993815a13899e60fdc17f91e270d",
"text": "of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) WHEN AUTOENCODERS MEET RECOMMENDER SYSTEMS: COFILS APPROACH Julio César Barbieri Gonzalez de Almeida",
"title": ""
},
{
"docid": "5df4c47f9b1d1bffe19a622e9e3147ac",
"text": "Regeneration of load-bearing segmental bone defects is a major challenge in trauma and orthopaedic surgery. The ideal bone graft substitute is a biomaterial that provides immediate mechanical stability, while stimulating bone regeneration to completely bridge defects over a short period. Therefore, selective laser melted porous titanium, designed and fine-tuned to tolerate full load-bearing, was filled with a physiologically concentrated fibrin gel loaded with bone morphogenetic protein-2 (BMP-2). This biomaterial was used to graft critical-sized segmental femoral bone defects in rats. As a control, porous titanium implants were either left empty or filled with a fibrin gels without BMP-2. We evaluated bone regeneration, bone quality and mechanical strength of grafted femora using in vivo and ex vivo µCT scanning, histology, and torsion testing. This biomaterial completely regenerated and bridged the critical-sized bone defects within eight weeks. After twelve weeks, femora were anatomically re-shaped and revealed open medullary cavities. More importantly, new bone was formed throughout the entire porous titanium implants and grafted femora regained more than their innate mechanical stability: torsional strength exceeded twice their original strength. In conclusion, combining porous titanium implants with a physiologically concentrated fibrin gels loaded with BMP-2 improved bone regeneration in load-bearing segmental defects. This material combination now awaits its evaluation in larger animal models to show its suitability for grafting load-bearing defects in trauma and orthopaedic surgery.",
"title": ""
},
{
"docid": "6b58567286efcb6ac857b7ef778a6e40",
"text": "Goal: Bucking the trend of big data, in microdevice engineering, small sample size is common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation, have brought biosignal analysis new challenges. Novel representation and classification approaches need to be developed to effectively recognize targets of interests with the absence of a large training set. Methods: Moving away from the traditional signal analysis in the spatiotemporal domain, we exploit the biosignal representation in the topological domain that would reveal the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify the biosignals even when the sample size is extremely small. Results: This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK) where five categories of bisignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small dataset. Conclusion: Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. Significance: This paper makes two distinctive contributions to the field of biosignal analysis, including performing signal processing in the topological domain and handling extremely small dataset. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating assay process using ACEK.",
"title": ""
},
{
"docid": "1e30d2f8e11bfbd868fdd0dfc0ea4179",
"text": "In this paper, I study how companies can use their personnel data and information from job satisfaction surveys to predict employee quits. An important issue discussed at length in the paper is how employers can ensure the anonymity of employees in surveys used for management and HR analytics. I argue that a simple mechanism where the company delegates the implementation of job satisfaction surveys to an external consulting company can be optimal. In the subsequent empirical analysis, I use a unique combination of firm-level data (personnel records) and information from job satisfaction surveys to assess the benefits for companies using data in their decision-making. Moreover, I show how companies can move from a descriptive to a predictive approach.",
"title": ""
},
{
"docid": "bd80596e80eab8a08ec5bf7afe49f46d",
"text": "What aspects of movement are represented in the primary motor cortex (M1): relatively low-level parameters like muscle force, or more abstract parameters like handpath? To examine this issue, the activity of neurons in M1 was recorded in a monkey trained to perform a task that dissociates three major variables of wrist movement: muscle activity, direction of movement at the wrist joint, and direction of movement in space. A substantial group of neurons in M1 (28 out of 88) displayed changes in activity that were muscle-like. Unexpectedly, an even larger group of neurons in M1 (44 out of 88) displayed changes in activity that were related to the direction of wrist movement in space independent of the pattern of muscle activity that generated the movement. Thus, both \"muscles\" and \"movements\" appear to be strongly represented in M1.",
"title": ""
}
] | scidocsrr |
14aaebf21720dc0e75f06d636974de7f | SMARTbot: A Behavioral Analysis Framework Augmented with Machine Learning to Identify Mobile Botnet Applications | [
{
"docid": "5a392f4c9779c06f700e2ff004197de9",
"text": "Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classiier learning systems. Both form a set of classiiers that are combined by v oting, bagging by generating replicated boot-strap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learns decision trees and testing on a representative collection of datasets. While both approaches substantially improve predictive accuracy, boosting shows the greater beneet. On the other hand, boosting also produces severe degradation on some datasets. A small change to the way that boosting combines the votes of learned classiiers reduces this downside and also leads to slightly better results on most of the datasets considered.",
"title": ""
},
{
"docid": "3e26fe227e8c270fda4fe0b7d09b2985",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources avalible and limited privileges granted to the user, but also presents unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "2f2291baa6c8a74744a16f27df7231d2",
"text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. 
Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ",
"title": ""
},
{
"docid": "2e12a5f308472f3f4d19d4399dc85546",
"text": "This paper presents a taxonomy of replay attacks on cryptographic protocols in terms of message origin and destination. The taxonomy is independent of any method used to analyze or prevent such attacks. It is also complete in the sense that any replay attack is composed entirely of elements classi ed by the taxonomy. The classi cation of attacks is illustrated using both new and previously known attacks on protocols. The taxonomy is also used to discuss the appropriateness of particular countermeasures and protocol analysis methods to particular kinds of replays.",
"title": ""
},
{
"docid": "3cae5c0440536b95cf1d0273071ad046",
"text": "Android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system.\n This paper presents VetDroid, a dynamic analysis platform for reconstructing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid features a systematic framework to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1,249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.",
"title": ""
}
] | [
{
"docid": "d0486fc1c105cd3e13ca855221462973",
"text": "Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation. Under a reasonable transformation function, our approach can be factorized into two stages, and each stage can be efficiently optimized via gradient back-propagation throughout the deep networks. We collect a new dataset with 131 pathological samples, which, to the best of our knowledge, is the largest set for pancreatic cyst segmentation. Without human assistance, our approach reports a 63.44% average accuracy, measured by the Dice-Sørensen coefficient (DSC), which is higher than the number (60.46%) without deep supervision.",
"title": ""
},
{
"docid": "0243035834fcce312f7cb1d87ef5c71b",
"text": "This work develops a representation learning method for bipartite networks. While existing works have developed various embedding methods for network data, they have primarily focused on homogeneous networks in general and overlooked the special properties of bipartite networks. As such, these methods can be suboptimal for embedding bipartite networks. In this paper, we propose a new method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks. By performing biased random walks purposefully, we generate vertex sequences that can well preserve the long-tail distribution of vertices in the original bipartite network. We then propose a novel optimization framework by accounting for both the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links) in learning the vertex representations. We conduct extensive experiments on several real datasets covering the tasks of link prediction (classification), recommendation (personalized ranking), and visualization. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our BiNE method.",
"title": ""
},
{
"docid": "fc29f8e0d932140b5f48b35e4175b51a",
"text": "A three-dimensional (3D) geometric model obtained from a 3D device or other approaches is not necessarily watertight due to the presence of geometric deficiencies. These inadequacies must be repaired to create a valid surface mesh on the model as a pre-process of computational engineering analyses. This procedure has been a tedious and labor-intensive step, as there are many kinds of deficiencies that can make the geometry to be nonwatertight, such as gaps and holes. It is still challenging to repair discrete surface models based on available geometric information. The focus of this paper is to develop a new automated method for patching holes on the surface models in order to achieve watertightness. It describes a numerical algorithm utilizing Non-Uniform Rational B-Splines (NURBS) surfaces to generate smooth triangulated surface patches for topologically simple holes on discrete surface models. The Delaunay criterion for point insertion and edge swapping is used in this algorithm to improve the outcome. Surface patches are generated based on existing points surrounding the holes without altering them. The watertight geometry produced can be used in a wide range of engineering applications in the field of computational engineering simulation studies.",
"title": ""
},
{
"docid": "4932cb674e281098a5ef8007d3e37032",
"text": "We present Sparse Non-negative Matrix (SNM) estimation, a novel probability estimation technique for language modeling that can efficiently incorporate arbitrary features. We evaluate SNM language models on two corpora: the One Billion Word Benchmark and a subset of the LDC English Gigaword corpus. Results show that SNM language models trained with n-gram features are a close match for the well-established Kneser-Ney models. The addition of skip-gram features yields a model that is in the same league as the state-of-the-art recurrent neural network language models, as well as complementary: combining the two modeling techniques yields the best known result on the One Billion Word Benchmark. On the Gigaword corpus further improvements are observed using features that cross sentence boundaries. The computational advantages of SNM estimation over both maximum entropy and neural network estimation are probably its main strength, promising an approach that has large flexibility in combining arbitrary features and yet scales gracefully to large amounts of data.",
"title": ""
},
{
"docid": "8bbbaab2cf7825ca98937de14908e655",
"text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. A Model which describes about error detection in software Reliability is called Software Reliability Growth Model. This paper reviews various existing software reliability models and there failure intensity function and the mean value function. On the basis of this review a model is proposed for the software reliability having different mean value function and failure intensity function.",
"title": ""
},
{
"docid": "8035245f1aa7edebd74e39332bdef3c9",
"text": "In order to develop theory any community of scientists must agree as to what constitutes its phenomena of interest. A distinction is made between phenomena of interest and exemplars. The concept \"prevention\" is viewed as an exemplar, whereas the concept \"empowerment\" is suggested as a leading candidate for the title \"phenomena of interest\" to Community Psychology. The ecological nature of empowerment theory is described, and some of the terms of empowerment (definitions, conditions, and periods of time) are explicated. Eleven assumptions, presuppositions, and hypotheses are offered as guidelines for theory development and empirical study.",
"title": ""
},
{
"docid": "b687ad05040b3df09a9a6381f7e34d04",
"text": "ÐThe research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principle driving application for this technology is afourth generationo embedded computing: asmarto' environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. This paper will examine the mathematical tools that have proven successful, provide a taxonomy of the problem domain, and then examine the stateof-the-art. Four areas will receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/ perceptual user interfaces. Finally, the paper will discuss some of the research challenges and opportunities. Index TermsÐLooking at people, face recognition, gesture recognition, visual interface, appearance-based vision, wearable computing, ubiquitious.",
"title": ""
},
{
"docid": "e6bbe7de06295817435acafbbb7470cc",
"text": "Cortical circuits work through the generation of coordinated, large-scale activity patterns. In sensory systems, the onset of a discrete stimulus usually evokes a temporally organized packet of population activity lasting ∼50–200 ms. The structure of these packets is partially stereotypical, and variation in the exact timing and number of spikes within a packet conveys information about the identity of the stimulus. Similar packets also occur during ongoing stimuli and spontaneously. We suggest that such packets constitute the basic building blocks of cortical coding.",
"title": ""
},
{
"docid": "92ac3bfdcf5e554152c4ce2e26b77315",
"text": "How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"title": ""
},
{
"docid": "5b56288bb7b49f18148f28798cfd8129",
"text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "ab4a788fd82d5953e22032b1361328c2",
"text": "To recognize application of Artificial Neural Networks (ANNs) in weather forecasting, especially in rainfall forecasting a comprehensive literature review from 1923 to 2012 is done and presented in this paper. And it is found that architectures of ANN such as BPN, RBFN is best established to be forecast chaotic behavior and have efficient enough to forecast monsoon rainfall as well as other weather parameter prediction phenomenon over the smaller geographical region.",
"title": ""
},
{
"docid": "d24980c1a1317c8dd055741da1b8c7a7",
"text": "Influence Maximization (IM), which selects a set of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"li-ieq1-2807843.gif\"/></alternatives></inline-formula> users (called seed set) from a social network to maximize the expected number of influenced users (called influence spread), is a key algorithmic problem in social influence analysis. Due to its immense application potential and enormous technical challenges, IM has been extensively studied in the past decade. In this paper, we survey and synthesize a wide spectrum of existing studies on IM from an <italic>algorithmic perspective</italic>, with a special focus on the following key aspects: (1) a review of well-accepted diffusion models that capture the information diffusion process and build the foundation of the IM problem, (2) a fine-grained taxonomy to classify existing IM algorithms based on their design objectives, (3) a rigorous theoretical comparison of existing IM algorithms, and (4) a comprehensive study on the applications of IM techniques in combining with novel context features of social networks such as topic, location, and time. Based on this analysis, we then outline the key challenges and research directions to expand the boundary of IM research.",
"title": ""
},
{
"docid": "f04efdcb31c3ec070ad0c50737c3eb2b",
"text": "Previous works on image emotion analysis mainly focused on predicting the dominant emotion category or the average dimension values of an image for affective image classification and regression. However, this is often insufficient in various real-world applications, as the emotions that are evoked in viewers by an image are highly subjective and different. In this paper, we propose to predict the continuous probability distribution of image emotions which are represented in dimensional valence-arousal space. We carried out large-scale statistical analysis on the constructed Image-Emotion-Social-Net dataset, on which we observed that the emotion distribution can be well-modeled by a Gaussian mixture model. This model is estimated by an expectation-maximization algorithm with specified initializations. Then, we extract commonly used emotion features at different levels for each image. Finally, we formalize the emotion distribution prediction task as a shared sparse regression (SSR) problem and extend it to multitask settings, named multitask shared sparse regression (MTSSR), to explore the latent information between different prediction tasks. SSR and MTSSR are optimized by iteratively reweighted least squares. Experiments are conducted on the Image-Emotion-Social-Net dataset with comparisons to three alternative baselines. The quantitative results demonstrate the superiority of the proposed method.",
"title": ""
},
{
"docid": "baa5eff969c4c81c863ec4c4c6ce7734",
"text": "The research describes a rapid method for the determination of fatty acid (FA) contents in a micro-encapsulated fish-oil (μEFO) supplement by using attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopic technique and partial least square regression (PLSR) analysis. Using the ATR-FTIR technique, the μEFO powder samples can be directly analysed without any pre-treatment required, and our developed PLSR strategic approach based on the acquired spectral data led to production of a good linear calibration with R(2)=0.99. In addition, the subsequent predictions acquired from an independent validation set for the target FA compositions (i.e., total oil, total omega-3 fatty acids, EPA and DHA) were highly accurate when compared to the actual values obtained from standard GC-based technique, with plots between predicted versus actual values resulting in excellent linear fitting (R(2)≥0.96) in all cases. The study therefore demonstrated not only the substantial advantage of the ATR-FTIR technique in terms of rapidness and cost effectiveness, but also its potential application as a rapid, potentially automated, online monitoring technique for the routine analysis of FA composition in industrial processes when used together with the multivariate data analysis modelling.",
"title": ""
},
{
"docid": "4bfc1e2fbb2b1dea29360c410e5258b4",
"text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",
"title": ""
},
{
"docid": "1e5ebd122bee855d7e8113d5fe71202d",
"text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥",
"title": ""
},
{
"docid": "8c4540f3724dab3a173e94bdba7b0999",
"text": "The significant growth of the Internet of Things (IoT) is revolutionizing the way people live by transforming everyday Internet-enabled objects into an interconnected ecosystem of digital and personal information accessible anytime and anywhere. As more objects become Internet-enabled, the security and privacy of the personal information generated, processed and stored by IoT devices become complex and challenging to manage. This paper details the current security and privacy challenges presented by the increasing use of the IoT. Furthermore, investigate and analyze the limitations of the existing solutions with regard to addressing security and privacy challenges in IoT and propose a possible solution to address these challenges. The results of this proposed solution could be implemented during the IoT design, building, testing and deployment phases in the real-life environments to minimize the security and privacy challenges associated with IoT.",
"title": ""
}
] | scidocsrr |
3b854c906d0e8815a54e74071e004340 | Generic Physiological Features as Predictors of Player Experience | [
{
"docid": "72e4d7729031d63f96b686444c9b446e",
"text": "In this paper we describe the fundamentals of affective gaming from a physiological point of view, covering some of the origins of the genre, how affective videogames operate and current conceptual and technological capabilities. We ground this overview of the ongoing research by taking an in-depth look at one of our own early biofeedback-based affective games. Based on our analysis of existing videogames and our own experience with affective videogames, we propose a new approach to game design based on several high-level design heuristics: assist me, challenge me and emote me (ACE), a series of gameplay \"tweaks\" made possible through affective videogames.",
"title": ""
}
] | [
{
"docid": "6b622da925ead8c237518ab21fa3e85d",
"text": "Helpless children attribute their failures to lack of ability and view them as insurmountable. Mastery-oriented children, in contrast, tend to emphasize motivational factors and to view failure as surmountable. Although the performance of the two groups is usually identical during success of prior to failure, past research suggests that these groups may well differ in the degree to which they perceive that their successes are replicable and hence that their failures are avoidable. The present study was concerned with the nature of such differences. Children performed a task on which they encountered success and then failure. Half were asked a series of questions about their performance after success and half after failure. Striking differences emerged: Compared to mastery-oriented children, helpless children underestimated the number of success (and overestimated the number of failures), did not view successes as indicative of ability, and did not expect the successes to continue. subsequent failure led them to devalue ;their performance but left the mastery-oriented children undaunted. Thus, for helpless children, successes are less salient, less predictive, and less enduring--less successful.",
"title": ""
},
{
"docid": "e613ef418da545958c2094c5cce8f4f1",
"text": "This paper proposes a new visual SLAM technique that not only integrates 6 degrees of freedom (DOF) pose and dense structure but also simultaneously integrates the colour information contained in the images over time. This involves developing an inverse model for creating a super-resolution map from many low resolution images. Contrary to classic super-resolution techniques, this is achieved here by taking into account full 3D translation and rotation within a dense localisation and mapping framework. This not only allows to take into account the full range of image deformations but also allows to propose a novel criteria for combining the low resolution images together based on the difference in resolution between different images in 6D space. Another originality of the proposed approach with respect to the current state of the art lies in the minimisation of both colour (RGB) and depth (D) errors, whilst competing approaches only minimise geometry. Several results are given showing that this technique runs in real-time (30Hz) and is able to map large scale environments in high-resolution whilst simultaneously improving the accuracy and robustness of the tracking.",
"title": ""
},
{
"docid": "a334bfdcbaacf1cada20694e2e3dd867",
"text": "The oral bioavailability of diclofenac potassium 50 mg administered as a soft gelatin capsule (softgel capsule), powder for oral solution (oral solution), and tablet was evaluated in a randomized, open-label, 3-period, 6-sequence crossover study in healthy adults. Plasma diclofenac concentrations were measured using a validated liquid chromatography-mass spectrometry/mass spectrometry method, and pharmacokinetic analysis was performed by noncompartmental methods. The median time to achieve peak plasma concentrations of diclofenac was 0.5, 0.25, and 0.75 hours with the softgel capsule, oral solution, and tablet formulations, respectively. The geometric mean ratio and associated 90%CI for AUCinf, and Cmax of the softgel capsule formulation relative to the oral solution formulation were 0.97 (0.95-1.00) and 0.85 (0.76-0.95), respectively. The geometric mean ratio and associated 90%CI for AUCinf and Cmax of the softgel capsule formulation relative to the tablet formulation were 1.04 (1.00-1.08) and 1.67 (1.43-1.96), respectively. In conclusion, the exposure (AUC) of diclofenac with the new diclofenac potassium softgel capsule formulation was comparable to that of the existing oral solution and tablet formulations. The peak plasma concentration of diclofenac from the new softgel capsule was 67% higher than the existing tablet formulation, whereas it was 15% lower in comparison with the oral solution formulation.",
"title": ""
},
{
"docid": "d33aff7fc4923a7dc7521c2db56cb99e",
"text": "OBJECTIVE\nThis research was conducted to study the relationship between attribution and academic procrastination in University Students.\n\n\nMETHODS\nThe subjects were 203 undergraduate students, 55 males and 148 females, selected from English and French language and literature students of Tabriz University. Data were gathered through Procrastination Assessment Scale-student (PASS) and Causal Dimension Scale (CDA) and were analyzed by multiple regression analysis (stepwise).\n\n\nRESULTS\nThe results showed that there was a meaningful and negative relation between the locus of control and controllability in success context and academic procrastination. Besides, a meaningful and positive relation was observed between the locus of control and stability in failure context and procrastination. It was also found that 17% of the variance of procrastination was accounted by linear combination of attributions.\n\n\nCONCLUSION\nWe believe that causal attribution is a key in understanding procrastination in academic settings and is used by those who have the knowledge of Causal Attribution styles to organize their learning.",
"title": ""
},
{
"docid": "34461f38c51a270e2f3b0d8703474dfc",
"text": "Software vulnerabilities are the root cause of computer security problem. How people can quickly discover vulnerabilities existing in a certain software has always been the focus of information security field. This paper has done research on software vulnerability techniques, including static analysis, Fuzzing, penetration testing. Besides, the authors also take vulnerability discovery models as an example of software vulnerability analysis methods which go hand in hand with vulnerability discovery techniques. The ending part of the paper analyses the advantages and disadvantages of each technique introduced here and talks about the future direction of this field.",
"title": ""
},
{
"docid": "a078ace7b4093d10e4998667156c68bf",
"text": "In this study we develop a method which improves a credit card fraud detection solution currently being used in a bank. With this solution each transaction is scored and based on these scores the transactions are classified as fraudulent or legitimate. In fraud detection solutions the typical objective is to minimize the wrongly classified number of transactions. However, in reality, wrong classification of each transaction do not have the same effect in that if a card is in the hand of fraudsters its whole available limit is used up. Thus, the misclassification cost should be taken as the available limit of the card. This is what we aim at minimizing in this study. As for the solution method, we suggest a novel combination of the two well known meta-heuristic approaches, namely the genetic algorithms and the scatter search. The method is applied to real data and very successful results are obtained compared to current practice. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e032ace86d446b4ecacbda453913a373",
"text": "While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the languagemodel likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English↔French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "10187e22397b1c30b497943764d32c34",
"text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.",
"title": ""
},
{
"docid": "586ea16456356b6301e18f39e50baa89",
"text": "In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers.The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services.Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.",
"title": ""
},
{
"docid": "c27ba892408391234da524ffab0e7418",
"text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.",
"title": ""
},
{
"docid": "c716e7dc1c0e770001bcb57eab871968",
"text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.",
"title": ""
},
{
"docid": "f8b0dcd771e7e7cf50a05cf7221f4535",
"text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.",
"title": ""
},
{
"docid": "a8f352abf1203132d69f3199b2b2a705",
"text": "BACKGROUND\nQualitative research explores complex phenomena encountered by clinicians, health care providers, policy makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of qualitative design.\n\n\nOBJECTIVE\nTo develop a checklist for explicit and comprehensive reporting of qualitative studies (in depth interviews and focus groups).\n\n\nMETHODS\nWe performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed.\n\n\nRESULTS\nItems most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting.\n\n\nCONCLUSIONS\nThe criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.",
"title": ""
},
{
"docid": "792767dee5fb0251f0ff028c75d6e55a",
"text": "According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an attentional task demonstrated that the ERN--its timing and sensitivity to task parameters--can be explained in terms of the conflict theory. A new experiment confirmed predictions of this theory regarding the ERN and a second scalp potential, the N2, that is proposed to reflect conflict monitoring on correct response trials. Further analysis of the simulation data indicated that errors can be detected reliably on the basis of post-error conflict. It is concluded that the ERN can be explained in terms of response conflict and that monitoring for conflict may provide a simple mechanism for detecting errors.",
"title": ""
},
{
"docid": "3f418dd3a1374a7928e2428aefe4fe29",
"text": "The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach for tackling this problem is commonly known as pruning and it consists of training a larger than necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used for solving it, in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "eb2db8f0eb72c1721a78b3a2abbacdef",
"text": "Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition community (Krizhevsky et al., 2012) for their superior predictive properties including robustness to overfitting. However their application to algorithmic trading has not been previously researched, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. In particular we describe the configuration and training approach and then demonstrate their application to backtesting a simple trading strategy over 43 different Commodity and FX future mid-prices at 5-minute intervals. All results in this paper are generated using a C++ implementation on the Intel Xeon Phi co-processor which is 11.4x faster than the serial version and a Python strategy backtesting environment both of which are available as open source code written by the authors.",
"title": ""
},
{
"docid": "fda10c187c97f5c167afaa0f84085953",
"text": "We provide empirical evidence that suggests social media and stock markets have a nonlinear causal relationship. We take advantage of an extensive data set composed of social media messages related to DJIA index components. By using information-theoretic measures to cope for possible nonlinear causal coupling between social media and stock markets systems, we point out stunning differences in the results with respect to linear coupling. Two main conclusions are drawn: First, social media significant causality on stocks’ returns are purely nonlinear in most cases; Second, social media dominates the directional coupling with stock market, an effect not observable within linear modeling. Results also serve as empirical guidance on model adequacy in the investigation of sociotechnical and financial systems.",
"title": ""
},
{
"docid": "fde78187088da4d4b8fe4cb0f959b860",
"text": "The key question raised in this research in progress paper is whether the development stage of a (hardware) startup can give an indication of the crowdfunding type it decides to choose. Throughout the paper, I empirically investigate the German crowdfunding landscape and link it to startups in the hardware sector, picking up the proposed notion of an emergent hardware renaissance. To identify the potential points of contact between crowdfunds and startups, an evaluation of different startup stage models with regard to funding requirements is provided, as is an overview of currently used crowdfunding typologies. The example of two crowdfunding platforms (donation and non-monetary reward crowdfunding vs. equity-based crowdfunding) and their respective hardware projects and startups is used to highlight the potential of this research in progress. 1 Introduction Originally motivated by Paul Graham's 'The Hardware Renaissance' (2012) and further spurred by Witheiler's 'The hardware revolution will be crowdfunded' (2013), I chose to consider the intersection of startups, crowdfunding, and hardware. This is particularly interesting since literature on innovation and startup funding has indeed grown to some sophistication regarding the timing of more classic sources of capital in a startup's life, such as bootstrapping, business angel funding, and venture capital (cf. e.g., Schwienbacher & Larralde, 2012; Metrick & Yasuda, 2011). Due to the novelty of crowdfunding, however, general research on this type of funding is just at the beginning stages and many papers are rather focused on specific elements of the phenomenon (e.g., Belleflamme et al., 2013; Agrawal et al. 2011) and / or exploratory in nature (e.g., Mollick, 2013). What is missing is a verification of the research on potential points of contact between crowdfunds and startups. It remains unclear when crowdfunding is used—primarily during the early seed stage for example or equally at some later point as well—and what types apply (cf. e.g., Collins & Pierrakis, 2012). Simply put, the research question that emerges is whether the development stage of a startup can give an indication of the crowdfunding type it decides to choose. To further explore an answer to this question, I commenced an investigation of the German crowdfunding scene with a focus on hardware startups. Following desk research on platforms situated in German-speaking areas—Germany, Austria, Switzerland—, a categorization of the respectively used funding types is still in process, and transitions into a quantitative analysis and an in-depth case study-based assessment. The prime challenge of such an investigation …",
"title": ""
},
{
"docid": "43d307f1e7aa43350399e7343946ac47",
"text": "Computer based medical decision support system (MDSS) can be useful for the physicians with its fast and accurate decision making process. Predicting the existence of heart disease accurately, results in saving life of patients followed by proper treatment. The main objective of our paper is to present a MDSS for heart disease classification based on sequential minimal optimization (SMO) technique in support vector machine (SVM). In this we illustrated the UCI machine learning repository data of Cleveland heart disease database; we trained SVM by using SMO technique. Training a SVM requires the solution of a very large QP optimization problem..SMO algorithm breaks this large optimization problem into small sub-problems. Both the training and testing phases give the accuracy on each record. The results proved that the MDSS is able to carry out heart disease diagnosis accurately in fast way and on a large dataset it shown good ability of prediction.",
"title": ""
}
] | scidocsrr |
108eb06bba679458650bcfb0ceedd835 | Making machine learning models interpretable | [
{
"docid": "be9cea5823779bf5ced592f108816554",
"text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov",
"title": ""
}
] | [
{
"docid": "e755e96c2014100a69e4a962d6f75fb5",
"text": "We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera. Our method images the surface under arbitrary environment lighting with the flash turned on, thereby avoiding shadows while simultaneously capturing highfrequency specular highlights. We train a CNN to regress an SVBRDF and surface normals from this image. Our network is trained using a large-scale SVBRDF dataset and designed to incorporate physical insights for material estimation, including an in-network rendering layer to model appearance and a material classifier to provide additional supervision during training. We refine the results from the network using a dense CRF module whose terms are designed specifically for our task. The framework is trained end-to-end and produces high quality results for a variety of materials. We provide extensive ablation studies to evaluate our network on both synthetic and real data, while demonstrating significant improvements in comparisons with prior works.",
"title": ""
},
{
"docid": "559a4175347e5fea57911d9b8c5080e6",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "17c49edf5842fb918a3bd4310d910988",
"text": "In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"title": ""
},
{
"docid": "c495fadfd4c3e17948e71591e84c3398",
"text": "A real-time, digital algorithm for pulse width modulation (PWM) with distortion-free baseband is developed in this paper. The algorithm not only eliminates the intrinsic baseband distortion of digital PWM but also avoids the appearance of side-band components of the carrier in the baseband even for low switching frequencies. Previous attempts to implement digital PWM with these spectral properties required several processors due to their complexity; the proposed algorithm uses only several FIR filters and a few multiplications and additions and therefore is implemented in real time on a standard DSP. The performance of the algorithm is compared with that of uniform, double-edge PWM modulator via experimental measurements for several bandlimited modulating signals.",
"title": ""
},
{
"docid": "93f0026a850a620ecabafdbfec3abb72",
"text": "Knet (pronounced \"kay-net\") is the Koç University machine learning framework implemented in Julia, a high-level, high-performance, dynamic programming language. Unlike gradient generating compilers like Theano and TensorFlow which restrict users into a modeling mini-language, Knet allows models to be defined by just describing their forward computation in plain Julia, allowing the use of loops, conditionals, recursion, closures, tuples, dictionaries, array indexing, concatenation and other high level language features. High performance is achieved by combining automatic differentiation of most of Julia with efficient GPU kernels and memory management. Several examples and benchmarks are provided to demonstrate that GPU support and automatic differentiation of a high level language are sufficient for concise definition and efficient training of sophisticated models.",
"title": ""
},
{
"docid": "46df05f01a027359f23d4de2396e2586",
"text": "Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation, automatic speech recognition, and especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step of identifying dialog act is identifying the boundary of the dialog act in utterances. In this paper, we focus on segmenting the utterance according to the dialog act boundaries, i.e. functional segments identification, for Vietnamese utterances. We investigate carefully functional segment identification in two approaches: (1) machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF) on two different conversational datasets: (1) Facebook messages (Message data); (2) transcription from phone conversations (Phone data). To the best of our knowledge, this is the first work that applies deep learning based approach to dialog act segmentation. As the results show, deep learning approach performs appreciably better as to compare with traditional machine learning approaches. Moreover, it is also the first study that tackles dialog act and functional segment identification for Vietnamese.",
"title": ""
},
{
"docid": "f66c9aa537630fdbff62d8d49205123b",
"text": "This workshop will explore community based repositories for educational data and analytic tools that are used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, as well as attendees, will identify and report on bottlenecks that remain toward our goal of a unified repository. We will discuss these as well as possible solutions. We will present LearnSphere, an NSF funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then have hands-on sessions in which attendees have the opportunity to apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (requires very basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits as well as the limitations of these solutions. Our goal is to create building blocks to allow researchers to integrate their data and analysis methods with others, in order to advance the future of learning science.",
"title": ""
},
{
"docid": "506a6a98e87fb5a6dc7e5cbe9cf27262",
"text": "Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex innerand cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large innerand cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. Source (GTA5) Target (BDD) Figure 1: Exemplar guided image translation examples of GTA5→ BDD. Best viewed in color.",
"title": ""
},
{
"docid": "46829dde25c66191bcefae3614c2dd3f",
"text": "User-generated content (UGC) on the Web, especially on social media platforms, facilitates the association of additional information with digital resources; thus, it can provide valuable supplementary content. However, UGC varies in quality and, consequently, raises the challenge of how to maximize its utility for a variety of end-users. This study aims to provide researchers and Web data curators with comprehensive answers to the following questions: What are the existing approaches and methods for assessing and ranking UGC? What features and metrics have been used successfully to assess and predict UGC value across a range of application domains? What methods can be effectively employed to maximize that value? This survey is composed of a systematic review of approaches for assessing and ranking UGC: results are obtained by identifying and comparing methodologies within the context of short text-based UGC on the Web. Existing assessment and ranking approaches adopt one of four framework types: the community-based framework takes into consideration the value assigned to content by a crowd of humans, the end-user--based framework adapts and personalizes the assessment and ranking process with respect to a single end-user, the designer-based framework encodes the software designer’s values in the assessment and ranking method, and the hybrid framework employs methods from more than one of these types. This survey suggests a need for further experimentation and encourages the development of new approaches for the assessment and ranking of UGC.",
"title": ""
},
{
"docid": "6c3f80b453d51e364eca52656ed54e62",
"text": "Despite substantial recent research activity related to continuous delivery and deployment (CD), there has not yet been a systematic, empirical study on how the practices often associated with continuous deployment have found their way into the broader software industry. This raises the question to what extent our knowledge of the area is dominated by the peculiarities of a small number of industrial leaders, such as Facebook. To address this issue, we conducted a mixed-method empirical study, consisting of a pre-study on literature, qualitative interviews with 20 software developers or release engineers with heterogeneous backgrounds, and a Web-based quantitative survey that attracted 187 complete responses. A major trend in the results of our study is that architectural issues are currently one of the main barriers for CD adoption. Further, feature toggles as an implementation technique for partial rollouts lead to unwanted complexity, and require research on better abstractions and modelling techniques for runtime variability. Finally, we conclude that practitioners are in need for more principled approaches to release decision making, e.g., which features to conduct A/B tests on, or which metrics to evaluate.",
"title": ""
},
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "712cd41c525b6632a7a5c424173d6f1e",
"text": "The use of 3-D multicellular spheroid (MCS) models is increasingly being accepted as a viable means to study cell-cell, cell-matrix and cell-drug interactions. Behavioral differences between traditional monolayer (2-D) cell cultures and more recent 3-D MCS confirm that 3-D MCS more closely model the in vivo environment. However, analyzing the effect of pharmaceutical agents on both monolayer cultures and MCS is very time intensive. This paper reviews the use of electrical impedance spectroscopy (EIS), a label-free whole cell assay technique, as a tool for automated screening of cell drug interactions in MCS models for biologically/physiologically relevant events over long periods of time. EIS calculates the impedance of a sample by applying an AC current through a range of frequencies and measuring the resulting voltage. This review will introduce techniques used in impedance-based analysis of 2-D systems; highlight recently developed impedance-based techniques for analyzing 3-D cell cultures; and discuss applications of 3-D culture impedance monitoring systems.",
"title": ""
},
{
"docid": "cc92787280db22c46a159d95f6990473",
"text": "A novel formulation for the voltage waveforms in high efficiency linear power amplifiers is described. This formulation demonstrates that a constant optimum efficiency and output power can be obtained over a continuum of solutions by utilizing appropriate harmonic reactive impedance terminations. A specific example is confirmed experimentally. This new formulation has some important implications for the possibility of realizing broadband >10% high efficiency linear RF power amplifiers.",
"title": ""
},
{
"docid": "ef26995e3979f479f4c3628283816d5d",
"text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other,video. The article discusses the implications of this approach for media theory, research and practice.",
"title": ""
},
{
"docid": "55a0fb2814fde7890724a137fc414c88",
"text": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "a7618e1370db3fca4262f8d36979aa91",
"text": "Generative Adversarial Network (GAN) has been shown to possess the capability to learn distributions of data, given infinite capacity of models [1, 2]. Empirically, approximations with deep neural networks seem to have “sufficiently large” capacity and lead to several success in many applications, such as image generation. However, most of the results are difficult to evaluate because of the curse of dimensionality and the unknown distribution of the data. To evaluate GANs, in this paper, we consider simple one-dimensional data coming from parametric distributions circumventing the aforementioned problems. We formulate rigorous techniques for evaluation under this setting. Based on this evaluation, we find that many state-ofthe-art GANs are very difficult to train to learn the true distribution and can usually only find some of the modes. If the GAN has learned, such as MMD GAN, we observe it has some generalization capabilities.",
"title": ""
},
{
"docid": "82865170278997209a650aa8be483703",
"text": "This paper presents a novel dataset for traffic accidents analysis. Our goal is to resolve the lack of public data for research about automatic spatio-temporal annotations for traffic safety in the roads. Through the analysis of the proposed dataset, we observed a significant degradation of object detection in pedestrian category in our dataset, due to the object sizes and complexity of the scenes. To this end, we propose to integrate contextual information into conventional Faster R-CNN using Context Mining (CM) and Augmented Context Mining (ACM) to complement the accuracy for small pedestrian detection. Our experiments indicate a considerable improvement in object detection accuracy: +8.51% for CM and +6.20% for ACM. Finally, we demonstrate the performance of accident forecasting in our dataset using Faster R-CNN and an Accident LSTM architecture. We achieved an average of 1.684 seconds in terms of Time-To-Accident measure with an Average Precision of 47.25%. Our Webpage for the paper is https:",
"title": ""
},
{
"docid": "1c8ac344f85ff4d4a711536841168b6a",
"text": "Internet Protocol Television (IPTV) is an increasingly popular multimedia service which is used to deliver television, video, audio and other interactive content over proprietary IP-based networks. Video on Demand (VoD) is one of the most popular IPTV services, and is very important for IPTV providers since it represents the second most important revenue stream after monthly subscriptions. In addition to high-quality VoD content, profitable VoD service provisioning requires an enhanced content accessibility to greatly improve end-user experience. Moreover, it is imperative to offer innovative features to attract new customers and retain existing ones. To achieve this goal, IPTV systems typically employ VoD recommendation engines to offer personalized lists of VoD items that are potentially interesting to a user from a large amount of available titles. In practice, a good recommendation engine does not offer popular and well-known titles, but is rather able to identify interesting among less popular items which would otherwise be hard to find. In this paper we report our experience in building a VoD recommendation system. The presented evaluation shows that our recommendation system is able to recommend less popular items while operating under a high load of end-user requests.",
"title": ""
},
{
"docid": "97065954a10665dee95977168b9e6c60",
"text": "We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.",
"title": ""
}
] | scidocsrr |
dd70d11322804629451fd718532a9dd4 | Walking in Facebook: A Case Study of Unbiased Sampling of OSNs | [
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
}
] | [
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "963d6b615ffd025723c82c1aabdbb9c6",
"text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.",
"title": ""
},
{
"docid": "863e71cf1c1eddf3c6ceac400670e6f7",
"text": "This paper provides a brief overview to four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.",
"title": ""
},
{
"docid": "52360aadfd831017ab70b4d234a57bc4",
"text": "Emerging big data analytics applications require a significant amount of server computational power. The costs of building and running a computing server to process big data and the capacity to which we can scale it are driven in large part by those computational resources. However, big data applications share many characteristics that are fundamentally different from traditional desktop, parallel, and scale-out applications. Big data analytics applications rely heavily on specific deep machine learning and data mining algorithms, and are running a complex and deep software stack with various components (e.g. Hadoop, Spark, MPI, Hbase, Impala, MySQL, Hive, Shark, Apache, and MangoDB) that are bound together with a runtime software system and interact significantly with I/O and OS, exhibiting high computational intensity, memory intensity, I/O intensity and control intensity. Current server designs, based on commodity homogeneous processors, will not be the most efficient in terms of performance/watt for this emerging class of applications. In other domains, heterogeneous architectures have emerged as a promising solution to enhance energy-efficiency by allowing each application to run on a core that matches resource needs more closely than a one-size-fits-all core. A heterogeneous architecture integrates cores with various micro-architectures and accelerators to provide more opportunity for efficient workload mapping. In this work, through methodical investigation of power and performance measurements, and comprehensive system level characterization, we demonstrate that a heterogeneous architecture combining high performance big and low power little cores is required for efficient big data analytics applications processing, and in particular in the presence of accelerators and near real-time performance constraints.",
"title": ""
},
{
"docid": "342cf76dd8b12195829aa33230bf5751",
"text": "Support Vector Machines (SVMs) have been very successful in text classification. However, the intrinsic geometric structure of text data has been ignored by standard kernels commonly used in SVMs. It is natural to assume that the documents are on the multinomial manifold, which is the simplex of multinomial models furnished with the Riemannian structure induced by the Fisher information metric. We prove that the Negative Geodesic Distance (NGD) on the multinomial manifold is conditionally positive definite (cpd), thus can be used as a kernel in SVMs. Experiments show the NGD kernel on the multinomial manifold to be effective for text classification, significantly outperforming standard kernels on the ambient Euclidean space.",
"title": ""
},
{
"docid": "734825ba0795a214c0cdf4c668ac7967",
"text": "Advances in microbial methods have demonstrated that microorganisms globally are the dominating organisms both concerning biomass and diversity. Their functional and genetic potential may exceed that of higher organisms. Studies of bacterial diversity have been hampered by their dependence on phenotypic characterization of bacterial isolates. Molecular techniques have provided the tools for analyzing the entire bacterial community including those which we are not able to grow in the laboratory. Reassociation analysis of DNA isolated directly from the bacteria in pristine soil and marine sediment samples revealed that such environments contained in the order of 10 000 bacterial types. The diversity of the total bacterial community was approximately 170 times higher than the diversity of the collection of bacterial isolates from the same soil. The culturing conditions therefore select for a small and probably skewed fraction of the organisms present in the environment. Environmental stress and agricultural management reduce the bacterial diversity. With the reassociation technique it was demonstrated that in heavily polluted fish farm sediments the diversity was reduced by a factor of 200 as compared to pristine sediments. Here we discuss some molecular mechanisms and environmental factors controlling the bacterial diversity in soil and sediments.",
"title": ""
},
{
"docid": "d107bb7ee16b24206f468aee2d0a47e4",
"text": "This paper presents a novel gradient correlation similarity (Gcs) measure-based decolorization model for faithfully preserving the appearance of the original color image. Contrary to the conventional data-fidelity term consisting of gradient error-norm-based measures, the newly defined Gcs measure calculates the summation of the gradient correlation between each channel of the color image and the transformed grayscale image. Two efficient algorithms are developed to solve the proposed model. On one hand, due to the highly nonlinear nature of Gcs measure, a solver consisting of the augmented Lagrangian and alternating direction method is adopted to deal with its approximated linear parametric model. The presented algorithm exhibits excellent iterative convergence and attains superior performance. On the other hand, a discrete searching solver is proposed by determining the solution with the minimum function value from the linear parametric model-induced candidate images. The non-iterative solver has advantages in simplicity and speed with only several simple arithmetic operations, leading to real-time computational speed. In addition, it is very robust with respect to the parameter and candidates. Extensive experiments under a variety of test images and a comprehensive evaluation against existing state-of-the-art methods consistently demonstrate the potential of the proposed model and algorithms.",
"title": ""
},
{
"docid": "e89123df2d60f011a3c6057030c42167",
"text": "Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",
"title": ""
},
{
"docid": "c744354fcc6115a83c916dcc71b381f4",
"text": "The spread of false rumours during emergencies can jeopardise the well-being of citizens as they are monitoring the stream of news from social media to stay abreast of the latest updates. In this paper, we describe the methodology we have developed within the PHEME project for the collection and sampling of conversational threads, as well as the tool we have developed to facilitate the annotation of these threads so as to identify rumourous ones. We describe the annotation task conducted on threads collected during the 2014 Ferguson unrest and we present and analyse our findings. Our results show that we can collect effectively social media rumours and identify multiple rumours associated with a range of stories that would have been hard to identify by relying on existing techniques that need manual input of rumour-specific keywords.",
"title": ""
},
{
"docid": "acacc206bd12bf787026c1cc0ff41ab9",
"text": "This paper presents a fruit size detecting and grading system based on image processing. After capturing the fruit side view image, some fruit characters is extracted by using detecting algorithms. According to these characters, grading is realized. Experiments show that this embedded grading system has the advantage of high accuracy of grading, high speed and low cost. It will have a good prospect of application in fruit quality detecting and grading areas.",
"title": ""
},
{
"docid": "84aacf4b56891e70063e438b0dc35040",
"text": "The increasing availability and maturity of both scalable computing architectures and deep syntactic parsers is opening up new possibilities for Relation Extraction (RE) on large corpora of natural language text. In this paper, we present FREEPAL, a resource designed to assist with the creation of relation extractors for more than 5,000 relations defined in the FREEBASE knowledge base (KB). The resource consists of over 10 million distinct lexico-syntactic patterns extracted from dependency trees, each of which is assigned to one or more FREEBASE relations with different confidence strengths. We generate the resource by executing a large-scale distant supervision approach on the CLUEWEB09 corpus to extract and parse over 260 million sentences labeled with FREEBASE entities and relations. We make FREEPAL freely available to the research community, and present a web demonstrator to the dataset, accessible from free-pal.appspot.com.",
"title": ""
},
{
"docid": "e2606242fcc89bfcf5c9c4cd71dd2c18",
"text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. he GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with same code rate and trellis complexity is presented.",
"title": ""
},
{
"docid": "99ffaa3f845db7b71a6d1cbc62894861",
"text": "There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.",
"title": ""
},
{
"docid": "597a3b52fd5114228d74398756d3359f",
"text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real-time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than any other individual difference.",
"title": ""
},
{
"docid": "a1ca1d15416ba6d6b2042e2a5d4597de",
"text": "Edge detection is a problem of fundamental importance in image analysis. Many approaches for edge detection have already revealed more are waiting to be. But edge detection using K-means algorithm is the most heuristic and unique approach. In this paper, we have proposed an algorithmic technique to detect the edge of any kind of true gray scale images considering the artificial features of the image as the feature set which is fed to K-Means algorithm for clustering the image and there to detect clearly the edges of the objects present in the considered image. The artificial features, which we have considered here, are mean, standard deviation, entropy and busyness of pixel intensity values.",
"title": ""
},
{
"docid": "296aa8ddf2985bcaaea56b2c8484557a",
"text": "Inflammatory damage in many neurodegenerative diseases is restricted to certain regions of the CNS, and while microglia have long been implicated in the pathology of many of these disorders, information comparing their gene expression in different CNS regions is lacking. Here we tested the hypothesis that the expression of purinergic receptors, estrogen receptors and other neuroprotective and pro-inflammatory genes differed among CNS regions in healthy mice. Because neurodegenerative diseases vary in incidence by sex and age, we also examined the regional distribution of these genes in male and female mice of four different ages between 21 days and 12 months. We postulated that pro-inflammatory gene expression would be higher in older animals, and lower in young adult females. We found that microglial gene expression differed across the CNS. Estrogen receptor alpha (Esr1) mRNA levels were often lower in microglia from the brainstem/spinal cord than from the cortex, whereas tumor necrosis factor alpha (Tnfα) expression was several times higher. In addition, the regional pattern of gene expression often changed with animal age; for example, no regional differences in P2X7 mRNA levels were detected in 21 day-old animals, but at 7 weeks and older, expression was highest in cerebellar microglia. Lastly, the expression of some genes was sexually dimorphic. In microglia from 12 month-old animals, mRNA levels of inducible nitric oxide synthase, but not Tnfα, were higher in females than males. These data suggest that microglial gene expression is not uniformly more pro-inflammatory in males or older animals. Moreover, microglia from CNS regions in which neuronal damage predominates in neurodegenerative disease do not generally express more pro-inflammatory genes than microglia from regions less frequently affected. This study provides an in-depth assessment of regional-, sex- and age-dependent differences in key microglial transcripts from the healthy mouse CNS.",
"title": ""
},
{
"docid": "468cdc4decf3871314ce04d6e49f6fad",
"text": "Documents come naturally with structure: a section contains paragraphs which itself contains sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.",
"title": ""
},
{
"docid": "3eb76b15fa11704c0a6f3fc64f880aa8",
"text": "The emergence of environmental problems and the increased awareness towards green purchase behaviour have received many responses by the stakeholders’ worldwide like from the government bodies, researchers, businesses, consumers and so on. Government’s bodies, for example, have responded by developing and introducing their own environmentally-linked policies to be implemented in their countries, which are intended to conserve and preserve the environment. Researchers, on the other hand, are continuously conducting extensive studies and publishing their findings on the issues to inform the public, while businesses that promote the selling of green products (or environmentally friendly products) in the marketplace have been increasing in number. Segments of green consumers have been observed to emerge and grow in size worldwide including Malaysia. This may be due to the increased number of green products introduced to consumers in the marketplace. Moreover, scholars from Malaysia also argued that, this trend is experiencing tremendous growth. Although there are responses from these stakeholders, especially consumers, who have had a positive impact on the environment, the trend of the green purchase behaviour by Malaysian consumers remains unobserved. Therefore, the authors aim to answer the questions concerning whether a trend can be observed in the green purchase behaviour of Malaysian consumers. The ability to observe the green purchase behaviour trend is useful, particularly for marketers and businesses that are selling or intending to sell green products within the country.",
"title": ""
},
{
"docid": "5b131fbca259f07bd1d84d4f61761903",
"text": "We aimed to identify a blood flow restriction (BFR) endurance exercise protocol that would both maximize cardiopulmonary and metabolic strain, and minimize the perception of effort. Twelve healthy males (23 ± 2 years, 75 ± 7 kg) performed five different exercise protocols in randomized order: HI, high-intensity exercise starting at 105% of the incremental peak power (P peak); I-BFR30, intermittent BFR at 30% P peak; C-BFR30, continuous BFR at 30% P peak; CON30, control exercise without BFR at 30% P peak; I-BFR0, intermittent BFR during unloaded exercise. Cardiopulmonary, gastrocnemius oxygenation (StO2), capillary lactate ([La]), and perceived exertion (RPE) were measured. V̇O2, ventilation (V̇ E), heart rate (HR), [La] and RPE were greater in HI than all other protocols. However, muscle StO2 was not different between HI (set1—57.8 ± 5.8; set2—58.1 ± 7.2%) and I-BRF30 (set1—59.4 ± 4.1; set2—60.5 ± 6.6%, p < 0.05). While physiologic responses were mostly similar between I-BFR30 and C-BFR30, [La] was greater in I-BFR30 (4.2 ± 1.1 vs. 2.6 ± 1.1 mmol L−1, p = 0.014) and RPE was less (5.6 ± 2.1 and 7.4 ± 2.6; p = 0.014). I-BFR30 showed similar reduced muscle StO2 compared with HI, and increased blood lactate compared to C-BFR30 exercise. Therefore, this study demonstrate that endurance cycling with intermittent BFR promotes muscle deoxygenation and metabolic strain, which may translate into increased endurance training adaptations while minimizing power output and RPE.",
"title": ""
}
] | scidocsrr |
e69e872948f131f16acf40c2288c7b81 | Food Hardships and Child Behavior Problems among Low-income Children Food Hardships and Child Behavior Problems among Low-income Children | [
{
"docid": "e91f0323df84e4c79e26822a799d54fd",
"text": "Researchers have renewed an interest in the harmful consequences of poverty on child development. This study builds on this work by focusing on one mechanism that links material hardship to child outcomes, namely the mediating effect of maternal depression. Using data from the National Maternal and Infant Health Survey, we found that maternal depression and poverty jeopardized the development of very young boys and girls, and to a certain extent, affluence buffered the deleterious consequences of depression. Results also showed that chronic maternal depression had severe implications for both boys and girls, whereas persistent poverty had a strong effect for the development of girls. The measures of poverty and maternal depression used in this study generally had a greater impact on measures of cognitive development than motor development.",
"title": ""
}
] | [
{
"docid": "f2a677515866e995ff8e0e90561d7cbc",
"text": "Pattern matching and data abstraction are important concepts in designing programs, but they do not fit well together. Pattern matching depends on making public a free data type representation, while data abstraction depends on hiding the representation. This paper proposes the views mechanism as a means of reconciling this conflict. A view allows any type to be viewed as a free data type, thus combining the clarity of pattern matching with the efficiency of data abstraction.",
"title": ""
},
{
"docid": "ddb70e707b63b30ee8e3b98b43db12a0",
"text": "Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered \"Heart bleed\" vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows -- across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.",
"title": ""
},
{
"docid": "78fafa0e14685d317ab88361d0a0dc8c",
"text": "Industry analysts expect volume production of integrated circuits on 300-mm wafers to start in 2001 or 2002. At that time, appropriate production equipment must be available. To meet this need, the MEDEA Project has supported us at ASM Europe in developing an advanced vertical batch furnace system for 300-mm wafers. Vertical furnaces are widely used for many steps in the production of integrated circuits. In volume production, these batch furnaces achieve a lower cost per production step than single-wafer processing methods. Applications for vertical furnaces are extensive, including the processing of low-pressure chemical vapor deposition (LPCVD) layers such as deposited oxides, polysilicon, and nitride. Furthermore, the furnaces can be used for oxidation and annealing treatments. As the complexity of IC technology increases, production equipment must meet the technology guidelines summarized in Table 1 from the Semiconductor Industry Association’s Roadmap. The table shows that the minimal feature size will sharply decrease, and likewise the particle size and level will decrease. The challenge in designing a new generation of furnaces for 300-mm wafers was to improve productivity as measured in throughput (number of wafers processed per hour), clean-room footprint, and capital cost. Therefore, we created a completely new design rather than simply upscaling the existing 200mm equipment.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "459b07b78f3cbdcbd673881fd000da14",
"text": "The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches.",
"title": ""
},
{
"docid": "ad3147f3a633ec8612dc25dfde4a4f0c",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "9304c82e4b19c2f5e23ca45e7f2c9538",
"text": "Previous work has shown that using the GPU as a brute force method for SELECT statements on a SQLite database table yields significant speedups. However, this requires that the entire table be selected and transformed from the B-Tree to row-column format. This paper investigates possible speedups by traversing B+ Trees in parallel on the GPU, avoiding the overhead of selecting the entire table to transform it into row-column format and leveraging the logarithmic nature of tree searches. We experiment with different input sizes, different orders of the B+ Tree, and batch multiple queries together to find optimal speedups for SELECT statements with single search parameters as well as range searches. We additionally make a comparison to a simple GPU brute force algorithm on a row-column version of the B+ Tree.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "6e4c0b8625363e9acbe91c149af2c037",
"text": "OBJECTIVE\nThe present study assessed the effect of smoking on clinical, microbiological and immunological parameters in an experimental gingivitis model.\n\n\nMATERIAL AND METHODS\nTwenty-four healthy dental students were divided into two groups: smokers (n = 10); and nonsmokers (n = 14). Stents were used to prevent biofilm removal during brushing. Visible plaque index (VPI) and gingival bleeding index (GBI) were determined 5- on day -7 (running phase), baseline, 21 d (experimental gingivitis) and 28 d (resolution phase). Supragingival biofilm and gingival crevicular fluid were collected and assayed by checkerboard DNA-DNA hybridization and a multiplex analysis, respectively. Intragroup comparison was performed by Friedman and Dunn's multiple comparison tests, whereas the Mann-Whitney U-test was applied for intergroup analyses.\n\n\nRESULTS\nCessation of oral hygiene resulted in a significant increase in VPI, GBI and gingival crevicular fluid volume in both groups, which returned to baseline levels 7 d after oral hygiene was resumed. Smokers presented lower GBI than did nonsmokers (p < 0.05) at day 21. Smokers had higher total bacterial counts and higher proportions of red- and orange complex bacteria, as well as lower proportions of Actinomyces spp., and of purple- and yellow-complex bacteria (p < 0.05). Furthermore, the levels of key immune-regulatory cytokines, including interleukin (IL)-8, IL-17 and interferon-γ, were higher in smokers than in nonsmokers (p < 0.05).\n\n\nCONCLUSION\nSmokers and nonsmokers developed gingival inflammation after supragingival biofilm accumulation, but smokers had less bleeding, higher proportions of periodontal pathogens and distinct host-response patterns during the course of experimental gingivitis.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "110b0837952be3e0aa01f4859190a116",
"text": "Automatic recommendation has become a popular research field: it allows the user to discover items that match their tastes. In this paper, we proposed an expanded autoencoder recommendation framework. The stacked autoencoders model is employed to extract the feature of input then reconstitution the input to do the recommendation. Then the side information of items and users is blended in the framework and the Huber function based regularization is used to improve the recommendation performance. The proposed recommendation framework is applied on the movie recommendation. Experimental results on a public database in terms of quantitative assessment show significant improvements over conventional methods.",
"title": ""
},
{
"docid": "29a2c5082cf4db4f4dde40f18c88ca85",
"text": "Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.",
"title": ""
},
{
"docid": "d4f806a58d4cdc59cae675a765d4c6bc",
"text": "Our study examines whether ownership structure and boardroom characteristics have an effect on corporate financial fraud in China. The data come from the enforcement actions of the Chinese Securities Regulatory Commission (CSRC). The results from univariate analyses, where we compare fraud and nofraud firms, show that ownership and board characteristics are important in explaining fraud. However, using a bivariate probit model with partial observability we demonstrate that boardroom characteristics are important, while the type of owner is less relevant. In particular, the proportion of outside directors, the number of board meetings, and the tenure of the chairman are associated with the incidence of fraud. Our findings have implications for the design of appropriate corporate governance systems for listed firms. Moreover, our results provide information that can inform policy debates within the CSRC. D 2005 Elsevier B.V. All rights reserved. JEL classification: G34",
"title": ""
},
{
"docid": "40a6cc06e0e90fba161bc8bc8ec6446d",
"text": "Toxic comment classification has become an active research field with many recently proposed approaches. However, while these approaches address some of the task’s challenges others still remain unsolved and directions for further research are needed. To this end, we compare different deep learning and shallow approaches on a new, large comment dataset and propose an ensemble that outperforms all individual models. Further, we validate our findings on a second dataset. The results of the ensemble enable us to perform an extensive error analysis, which reveals open challenges for state-of-the-art methods and directions towards pending future research. These challenges include missing paradigmatic context and inconsistent dataset labels.",
"title": ""
},
{
"docid": "35dacb4b15e5c8fbd91cee6da807799a",
"text": "Stochastic gradient algorithms have been the main focus of large-scale learning problems and led to important successes in machine learning. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose a new adaptive learning rate algorithm, which utilizes curvature information for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.",
"title": ""
},
{
"docid": "6ceab65cc9505cf21824e9409cf67944",
"text": "Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for prediction task and utilize Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on largescale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level",
"title": ""
},
{
"docid": "17676785398d4ed24cc04cb3363a7596",
"text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.",
"title": ""
},
{
"docid": "0300e887815610a2f7d26994d027fe78",
"text": "This paper presents a computer vision based method for bar code reading. Bar code's geometric features and the imaging system parameters are jointly extracted from a tilted low resolution bar code image. This approach enables the use of cost effective cameras, increases the depth of acquisition, and provides solutions for cases where image quality is low. The performance of the algorithm is tested on synthetic and real test images, and extension to a 2D bar code (PDF417) is also discussed.",
"title": ""
},
{
"docid": "84b018fa45e06755746309014854bb9a",
"text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies",
"title": ""
},
{
"docid": "dd723b23b4a7d702f8d34f15b5c90107",
"text": "Smartphones have become a prominent part of our technology driven world. When it comes to uncovering, analyzing and submitting evidence in today's criminal investigations, mobile phones play a more critical role. Thus, there is a strong need for software tools that can help investigators in the digital forensics field effectively analyze smart phone data to solve crimes.\n This paper will accentuate how digital forensic tools assist investigators in getting data acquisition, particularly messages, from applications on iOS smartphones. In addition, we will lay out the framework how to build a tool for verifying data integrity for any digital forensics tool.",
"title": ""
}
] | scidocsrr |
b299604767a625ea5384e321d2bb238d | Generalized Thompson sampling for sequential decision-making and causal inference | [
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
}
] | [
{
"docid": "c39b143861d1e0c371ec1684bb29f4cc",
"text": "Data races are a particularly unpleasant kind of threading bugs. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed.",
"title": ""
},
{
"docid": "60922247ab6ec494528d3a03c0909231",
"text": "This paper proposes a new \"zone controlled induction heating\" (ZCIH) system. The ZCIH system consists of two or more sets of a high-frequency inverter and a split work coil, which adjusts the coil current amplitude in each zone independently. The ZCIH system has capability of controlling the exothermic distribution on the work piece to avoid the strain caused by a thermal expansion. As a result, the ZCIH system enables a rapid heating performance as well as an temperature uniformity. This paper proposes current phase control making the coil current in phase with each other, to adjust the coil current amplitude even when a mutual inductance exists between the coils. This paper presents operating principle, theoretical analysis, and experimental results obtained from a laboratory setup and a six-zone prototype for a semiconductor processing.",
"title": ""
},
{
"docid": "c1f803e02ea7d6ef3bf6644e3aa17862",
"text": "Recurrent neural networks are prime candidates for learning evolutions in multi-dimensional time series data. The performance of such a network is judged by the loss function, which is aggregated into a scalar value that decreases during training. Observing only this number hides the variation that occurs within the typically large training and testing data sets. Understanding these variations is of highest importance to adjust network hyperparameters, such as the number of neurons, number of layers or to adjust the training set to include more representative examples. In this paper, we design a comprehensive and interactive system that allows users to study the output of recurrent neural networks on both the complete training data and testing data. We follow a coarse-to-fine strategy, providing overviews of annual, monthly and daily patterns in the time series and directly support a comparison of different hyperparameter settings. We applied our method to a recurrent convolutional neural network that was trained and tested on 25 years of climate data to forecast meteorological attributes, such as temperature, pressure and wind velocity. We further visualize the quality of the forecasting models, when applied to various locations on Earth and we examine the combination of several forecasting models. This is the authors preprint. The definitive version is available at http://diglib.eg.org/ and http://onlinelibrary.wiley.com/.",
"title": ""
},
{
"docid": "e141b36a3e257c4b8155cdf0682a0143",
"text": "Major depressive disorder is a common mental disorder that affects almost 7% of the adult U.S. population. The 2017 Audio/Visual Emotion Challenge (AVEC) asks participants to build a model to predict depression levels based on the audio, video, and text of an interview ranging between 7-33 minutes. Since averaging features over the entire interview will lose most temporal information, how to discover, capture, and preserve useful temporal details for such a long interview are significant challenges. Therefore, we propose a novel topic modeling based approach to perform context-aware analysis of the recording. Our experiments show that the proposed approach outperforms context-unaware methods and the challenge baselines for all metrics.",
"title": ""
},
{
"docid": "d79b440e5417fae517286206394e8685",
"text": "When using plenoptic camera for digital refocusing, angular undersampling can cause severe (angular) aliasing artifacts. Previous approaches have focused on avoiding aliasing by pre-processing the acquired light field via prefiltering, demosaicing, reparameterization, etc. In this paper, we present a different solution that first detects and then removes aliasing at the light field refocusing stage. Different from previous frequency domain aliasing analysis, we carry out a spatial domain analysis to reveal whether the aliasing would occur and uncover where in the image it would occur. The spatial analysis also facilitates easy separation of the aliasing vs. non-aliasing regions and aliasing removal. Experiments on both synthetic scene and real light field camera array data sets demonstrate that our approach has a number of advantages over the classical prefiltering and depth-dependent light field rendering techniques.",
"title": ""
},
{
"docid": "67978cd2f94cabb45c1ea2c571cef4de",
"text": "Studies identifying oil shocks using structural vector autoregressions (VARs) reach different conclusions on the relative importance of supply and demand factors in explaining oil market fluctuations. This disagreement is due to different assumptions on the oil supply and demand elasticities that determine the identification of the oil shocks. We provide new estimates of oil-market elasticities by combining a narrative analysis of episodes of large drops in oil production with country-level instrumental variable regressions. When the estimated elasticities are embedded into a structural VAR, supply and demand shocks play an equally important role in explaining oil prices and oil quantities. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "3ee3cf039b1bc03d6b6e504ae87fc62f",
"text": "Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum distance to mean classifier, and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in terms of classification performances achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Hence, we make, through the affine transformation proposed, data from different sessions and subject comparable, providing a significant improvement in the BCI transfer learning problem.",
"title": ""
},
{
"docid": "12adb5e324d971d2c752f2193cec3126",
"text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to Gnutella protocol and implementations that bring significant performance and scalability improvements.",
"title": ""
},
{
"docid": "87b5c0021e513898693e575ca5479757",
"text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "664b003cedbca63ebf775bd9f062b8f1",
"text": "Since 1900, soil organic matter (SOM) in farmlands worldwide has declined drastically as a result of carbon turnover and cropping systems. Over the past 17 years, research trials were established to evaluate the efficacy of different commercial humates products on potato production. Data from humic acid (HA) trials showed that different cropping systems responded differently to different products in relation to yield and quality. Important qualifying factors included: source; concentration; processing; chelating or complexing capacity of the humic acid products; functional groups (Carboxyl; Phenol; Hydroxyl; Ketone; Ester; Ether; Amine), rotation and soil quality factors; consistency of the product in enhancing yield and quality of potato crops; mineralization effect; and influence on fertilizer use efficiency. Properties of humic substances, major constituents of soil organic matter, include chelation, mineralization, buffer effect, clay mineral-organic interaction, and cation exchange. Humates increase phosphorus availability by complexing ions into stable compounds, allowing the phosphorus ion to remain exchangeable for plants’ uptake. Collectively, the consistent use of good quality products in our replicated research plots in different years resulted in a yield increase from 11.4% to the maximum of 22.3%. Over the past decade, there has been a major increase in the quality of research and development of organic and humic acid products by some well-established manufacturers. Our experimentations with these commercial products showed an increase in the yield and quality of crops.",
"title": ""
},
{
"docid": "03dc23b2556e21af9424500e267612bb",
"text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.",
"title": ""
},
{
"docid": "ddd09bc1c5b16e273bb9d1eaeae1a7e8",
"text": "In this paper, we study concurrent beamforming issue for achieving high capacity in indoor millimeter-wave (mmWave) networks. The general concurrent beamforming issue is first formulated as an optimization problem to maximize the sum rates of concurrent transmissions, considering the mutual interference. To reduce the complexity of beamforming and the total setup time, concurrent beamforming is decomposed into multiple single-link beamforming, and an iterative searching algorithm is proposed to quickly achieve the suboptimal transmission/reception beam sets. A codebook-based beamforming protocol at medium access control (MAC) layer is then introduced in a distributive manner to determine the beam sets. Both analytical and simulation results demonstrate that the proposed protocol can drastically reduce total setup time, increase system throughput, and improve energy efficiency.",
"title": ""
},
{
"docid": "2dda75184e2c9c5507c75f84443fff08",
"text": "Text classification can help users to effectively handle and exploit useful information hidden in large-scale documents. However, the sparsity of data and the semantic sensitivity to context often hinder the classification performance of short texts. In order to overcome the weakness, we propose a unified framework to expand short texts based on word embedding clustering and convolutional neural network (CNN). Empirically, the semantically related words are usually close to each other in embedding spaces. Thus, we first discover semantic cliques via fast clustering. Then, by using additive composition over word embeddings from context with variable window width, the representations of multi-scale semantic units1 in short texts are computed. In embedding spaces, the restricted nearest word embeddings (NWEs)2 of the semantic units are chosen to constitute expanded matrices, where the semantic cliques are used as supervision information. Finally, for a short text, the projected matrix 3 and expanded matrices are combined and fed into CNN in parallel. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "545a7a98c79d14ba83766aa26cff0291",
"text": "Existing extreme learning algorithm have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM) called evolving type-2 ELM (eT2ELM) is proposed to cope with the four issues in this paper. The eT2ELM presents three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of the online certainty-based active learning method, which renders eT2ELM as a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby the hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated in 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable prediction, while sustaining low complexity.",
"title": ""
},
{
"docid": "a15c94c0ec40cb8633d7174b82b70a16",
"text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,",
"title": ""
},
{
"docid": "343c71c6013c5684b8860c4386b34526",
"text": "This paper seeks to analyse the extent to which organizations can learn from projects by focusing on the relationship between projects and their organizational context. The paper highlights three dimensions of project-based learning: the practice-based nature of learning, project autonomy and knowledge integration. This analysis generates a number of propositions on the relationship between the learning generated within projects and its transfer to other parts of the organization. In particular, the paper highlights the ‘learning boundaries’ which emerge when learning within projects creates new divisions in practice. These propositions are explored through a comparative analysis of two case studies of construction projects. This analysis suggests that the learning boundaries which develop around projects reflect the nested nature of learning, whereby different levels of learning may substitute for each other. Learning outcomes in the cases can thus be analysed in terms of the interplay between organizational learning and project-level learning. The paper concludes that learning boundaries are an important constraint on attempts to exploit the benefits of projectbased learning for the wider organization.",
"title": ""
},
{
"docid": "5ec8b094cbbbfbbc0632d85b32255c49",
"text": "Pyramidal neurons are characterized by their distinct apical and basal dendritic trees and the pyramidal shape of their soma. They are found in several regions of the CNS and, although the reasons for their abundance remain unclear, functional studies — especially of CA1 hippocampal and layer V neocortical pyramidal neurons — have offered insights into the functions of their unique cellular architecture. Pyramidal neurons are not all identical, but some shared functional principles can be identified. In particular, the existence of dendritic domains with distinct synaptic inputs, excitability, modulation and plasticity appears to be a common feature that allows synapses throughout the dendritic tree to contribute to action-potential generation. These properties support a variety of coincidence-detection mechanisms, which are likely to be crucial for synaptic integration and plasticity.",
"title": ""
}
] | scidocsrr |
3d64739572b4db24f15ed648fc62cdd5 | An Empirical Evaluation of Similarity Measures for Time Series Classification | [
{
"docid": "ceca5552bcb7a5ebd0b779737bc68275",
"text": "In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"title": ""
},
{
"docid": "510a43227819728a77ff0c7fa06fa2d0",
"text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.",
"title": ""
}
] | [
{
"docid": "1349c5daedd71bdfccaa0ea48b3fd54a",
"text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.",
"title": ""
},
{
"docid": "1de19775f0c32179f59674c7f0d8b540",
"text": "As the most commonly used bots in first-person shooter (FPS) online games, aimbots are notoriously difficult to detect because they are completely passive and resemble excellent honest players in many aspects. In this paper, we conduct the first field measurement study to understand the status quo of aimbots and how they play in the wild. For data collection purpose, we devise a novel and generic technique called baittarget to accurately capture existing aimbots from the two most popular FPS games. Our measurement reveals that cheaters who use aimbots cannot play as skillful as excellent honest players in all aspects even though aimbots can help them to achieve very high shooting performance. To characterize the unskillful and blatant nature of cheaters, we identify seven features, of which six are novel, and these features cannot be easily mimicked by aimbots. Leveraging this set of features, we propose an accurate and robust server-side aimbot detector called AimDetect. The core of AimDetect is a cascaded classifier that detects the inconsistency between performance and skillfulness of aimbots. We evaluate the efficacy and generality of AimDetect using the real game traces. Our results show that AimDetect can capture almost all of the aimbots with very few false positives and minor overhead.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "3d56b369e10b29969132c44897d4cc4c",
"text": "Real-world object classes appear in imbalanced ratios. This poses a significant challenge for classifiers which get biased towards frequent classes. We hypothesize that improving the generalization capability of a classifier should improve learning on imbalanced datasets. Here, we introduce the first hybrid loss function that jointly performs classification and clustering in a single formulation. Our approach is based on an ‘affinity measure’ in Euclidean space that leads to the following benefits: (1) direct enforcement of maximum margin constraints on classification boundaries, (2) a tractable way to ensure uniformly spaced and equidistant cluster centers, (3) flexibility to learn multiple class prototypes to support diversity and discriminability in feature space. Our extensive experiments demonstrate the significant performance improvements on visual classification and verification tasks on multiple imbalanced datasets. The proposed loss can easily be plugged in any deep architecture as a differentiable block and demonstrates robustness against different levels of data imbalance and corrupted labels.",
"title": ""
},
{
"docid": "1ebf198459b98048404b706e4852eae2",
"text": "Network forensics is a branch of digital forensics, which applies to network security. It is used to relate monitoring and analysis of the computer network traffic, that helps us in collecting information and digital evidence, for the protection of network that can use as firewall and IDS. Firewalls and IDS can't always prevent and find out the unauthorized access within a network. This paper presents an extensive survey of several forensic frameworks. There is a demand of a system which not only detects the complex attack, but also it should be able to understand what had happened. Here it talks about the concept of the distributed network forensics. The concept of the Distributed network forensics is based on the distributed techniques, which are useful for providing an integrated platform for the automatic forensic evidence gathering and important data storage, valuable support and an attack attribution graph generation mechanism to depict hacking events.",
"title": ""
},
{
"docid": "fd0e31b2675a797c26af731ef1ff22df",
"text": "State representations critically affect the effectiveness of learning in robots. In this paper, we propose a roboticsspecific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. Using prior knowledge about interacting with the physical world, robots can learn state representations that are consistent with physics. We identify five robotic priors and explain how they can be used for representation learning. We demonstrate the effectiveness of this approach in a simulated slot car racing task and a simulated navigation task with distracting moving objects. We show that our method extracts task-relevant state representations from highdimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "98b4e2d51efde6f4f8c43c29650b8d2f",
"text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.",
"title": ""
},
{
"docid": "209203c297898a2251cfd62bdfc37296",
"text": "Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computerbased problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"title": ""
},
{
"docid": "7735668d4f8407d9514211d9f5492ce6",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "e83227e0485cf7f3ba19ce20931bbc2f",
"text": "There has been an increased global demand for dermal filler injections in recent years. Although hyaluronic acid-based dermal fillers generally have a good safety profile, serious vascular complications have been reported. Here we present a typical case of skin necrosis following a nonsurgical rhinoplasty using hyaluronic acid filler. Despite various rescuing managements, unsightly superficial scars were left. It is critical for plastic surgeons and dermatologists to be familiar with the vascular anatomy and the staging of vascular complications. Any patients suspected to experience a vascular complication should receive early management under close monitoring. Meanwhile, the potentially devastating outcome caused by illegal practice calls for stricter regulations and law enforcement.",
"title": ""
},
{
"docid": "d559ace14dcc42f96d0a96b959a92643",
"text": "Graphs are an integral data structure for many parts of computation. They are highly effective at modeling many varied and flexible domains, and are excellent for representing the way humans themselves conceive of the world. Nowadays, there is lots of interest in working with large graphs, including social network graphs, “knowledge” graphs, and large bipartite graphs (for example, the Netflix movie matching graph).",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "9766e0507346e46e24790a4873979aa4",
"text": "Extreme learning machine (ELM) is proposed for solving a single-layer feed-forward network (SLFN) with fast learning speed and has been confirmed to be effective and efficient for pattern classification and regression in different fields. ELM originally focuses on the supervised, semi-supervised, and unsupervised learning problems, but just in the single domain. To our best knowledge, ELM with cross-domain learning capability in subspace learning has not been exploited very well. Inspired by a cognitive-based extreme learning machine technique (Cognit Comput. 6:376–390, 1; Cognit Comput. 7:263–278, 2.), this paper proposes a unified subspace transfer framework called cross-domain extreme learning machine (CdELM), which aims at learning a common (shared) subspace across domains. Three merits of the proposed CdELM are included: (1) A cross-domain subspace shared by source and target domains is achieved based on domain adaptation; (2) ELM is well exploited in the cross-domain shared subspace learning framework, and a new perspective is brought for ELM theory in heterogeneous data analysis; (3) the proposed method is a subspace learning framework and can be combined with different classifiers in recognition phase, such as ELM, SVM, nearest neighbor, etc. Experiments on our electronic nose olfaction datasets demonstrate that the proposed CdELM method significantly outperforms other compared methods.",
"title": ""
},
{
"docid": "9faf67646394dfedfef1b6e9152d9cf6",
"text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.",
"title": ""
},
{
"docid": "1b0cb70fb25d86443a01a313371a27ae",
"text": "We present a protocol for general state machine replication – a method that provides strong consistency – that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements.",
"title": ""
},
{
"docid": "b36549a4b16c2c8ab50f1adda99f3120",
"text": "Spatial representations of time are a ubiquitous feature of human cognition. Nevertheless, interesting sociolinguistic variations exist with respect to where in space people locate temporal constructs. For instance, while in English time metaphorically flows horizontally, in Mandarin an additional vertical dimension is employed. Noting that the bilingual mind can flexibly accommodate multiple representations, the present work explored whether Mandarin-English bilinguals possess two mental time lines. Across two experiments, we demonstrated that Mandarin-English bilinguals do indeed employ both horizontal and vertical representations of time. Importantly, subtle variations to cultural context were seen to shape how these time lines were deployed.",
"title": ""
},
{
"docid": "41611606af8671f870fb90e50c2e99fc",
"text": "Pointwise label and pairwise label are both widely used in computer vision tasks. For example, supervised image classification and annotation approaches use pointwise label, while attribute-based image relative learning often adopts pairwise labels. These two types of labels are often considered independently and most existing efforts utilize them separately. However, pointwise labels in image classification and tag annotation are inherently related to the pairwise labels. For example, an image labeled with \"coast\" and annotated with \"beach, sea, sand, sky\" is more likely to have a higher ranking score in terms of the attribute \"open\", while \"men shoes\" ranked highly on the attribute \"formal\" are likely to be annotated with \"leather, lace up\" than \"buckle, fabric\". The existence of potential relations between pointwise labels and pairwise labels motivates us to fuse them together for jointly addressing related vision tasks. In particular, we provide a principled way to capture the relations between class labels, tags and attributes, and propose a novel framework PPP(Pointwise and Pairwise image label Prediction), which is based on overlapped group structure extracted from the pointwise-pairwise-label bipartite graph. With experiments on benchmark datasets, we demonstrate that the proposed framework achieves superior performance on three vision tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "acb3689c9ece9502897cebb374811f54",
"text": "In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.",
"title": ""
}
] | scidocsrr |
d0a0791c9c6f6d9ffdc2f4ebb05a8241 | Big Data Analysis in Smart Manufacturing: A Review | [
{
"docid": "c12fb39060ec4dd2c7bb447352ea4e8a",
"text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.",
"title": ""
},
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
] | [
{
"docid": "9f21792dbe89fa95d85e7210cf1de9c6",
"text": "Convolutional Neural Networks have provided state-of-the-art results in several computer vision problems. However, due to a large number of parameters in CNNs, they require a large number of training samples which is a limiting factor for small sample size problems. To address this limitation, we propose SSF-CNN which focuses on learning the \"structure\" and \"strength\" of filters. The structure of the filter is initialized using a dictionary based filter learning algorithm and the strength of the filter is learned using the small sample training data. The architecture provides the flexibility of training with both small and large training databases, and yields good accuracies even with small size training data. The effectiveness of the algorithm is first demonstrated on MNIST, CIFAR10, and NORB databases, with varying number of training samples. The results show that SSF-CNN significantly reduces the number of parameters required for training while providing high accuracies on the test databases. On small sample size problems such as newborn face recognition and Omniglot, it yields state-of-the-art results. Specifically, on the IIITD Newborn Face Database, the results demonstrate improvement in rank-1 identification accuracy by at least 10%.",
"title": ""
},
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "792df318ee62c4e5409f53829c3de05c",
"text": "In this paper we present a novel technique to calibrate multiple casually aligned projectors on a fiducial-free cylindrical curved surface using a single camera. We impose two priors to the cylindrical display: (a) cylinder is a vertically extruded surface; and (b) the aspect ratio of the rectangle formed by the four corners of the screen is known. Using these priors, we can estimate the display's 3D surface geometry and camera extrinsic parameters using a single image without any explicit display to camera correspondences. Using the estimated camera and display properties, we design a novel deterministic algorithm to recover the intrinsic and extrinsic parameters of each projector using a single projected pattern seen by the camera which is then used to register the images on the display from any arbitrary viewpoint making it appropriate for virtual reality systems. Finally, our method can be extended easily to handle sharp corners — making it suitable for the common CAVE like VR setup. To the best of our knowledge, this is the first method that can achieve accurate geometric auto-calibration of multiple projectors on a cylindrical display without performing an extensive stereo reconstruction.",
"title": ""
},
{
"docid": "c663806c6b086b31e57a9d7e54a46d4b",
"text": "Deep neural networks are frequently used for computer vision, speech recognition and text processing. The reason is their ability to regress highly nonlinear functions. We present an end-to-end controller for steering autonomous vehicles based on a convolutional neural network (CNN). The deployed framework does not require explicit hand-engineered algorithms for lane detection, object detection or path planning. The trained neural net directly maps pixel data from a front-facing camera to steering commands and does not require any other sensors. We compare the controller performance with the steering behavior of a human driver.",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "f1744cf87ee2321c5132d6ee30377413",
"text": "How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion",
"title": ""
},
{
"docid": "94c9eec9aa4f36bf6b2d83c3cc8dbb12",
"text": "Many real world security problems can be modelled as finite zero-sum games with structured sequential strategies and limited interactions between the players. An abstract class of games unifying these models are the normal-form games with sequential strategies (NFGSS). We show that all games from this class can be modelled as well-formed imperfect-recall extensiveform games and consequently can be solved by counterfactual regret minimization. We propose an adaptation of the CFR algorithm for NFGSS and compare its performance to the standard methods based on linear programming and incremental game generation. We validate our approach on two security-inspired domains. We show that with a negligible loss in precision, CFR can compute a Nash equilibrium with five times less computation than its competitors. Game theory has been recently used to model many real world security problems, such as protecting airports (Pita et al. 2008) or airplanes (Tsai et al. 2009) from terrorist attacks, preventing fare evaders form misusing public transport (Yin et al. 2012), preventing attacks in computer networks (Durkota et al. 2015), or protecting wildlife from poachers (Fang, Stone, and Tambe 2015). Many of these security problems are sequential in nature. Rather than a single monolithic action, the players’ strategies are formed by sequences of smaller individual decisions. For example, the ticket inspectors make a sequence of decisions about where to check tickets and which train to take; a network administrator protects the network against a sequence of actions an attacker uses to penetrate deeper into the network. Sequential decision making in games has been extensively studied from various perspectives. Recent years have brought significant progress in solving massive imperfectinformation extensive-form games with a focus on the game of poker. Counterfactual regret minimization (Zinkevich et al. 2008) is the family of algorithms that has facilitated much of this progress, with a recent incarnation (Tammelin et al. 2015) essentially solving for the first time a variant of poker commonly played by people (Bowling et al. 2015). However, there has not been any transfer of these results to research on real world security problems. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. We focus on an abstract class of sequential games that can model many sequential security games, such as games taking place in physical space that can be discretized as a graph. This class of games is called normal-form games with sequential strategies (NFGSS) (Bosansky et al. 2015) and it includes, for example, existing game theoretic models of ticket inspection (Jiang et al. 2013), border patrolling (Bosansky et al. 2015), and securing road networks (Jain et al. 2011). In this work we formally prove that any NFGSS can be modelled as a slightly generalized chance-relaxed skew well-formed imperfect-recall game (CRSWF) (Lanctot et al. 2012; Kroer and Sandholm 2014), a subclass of extensiveform games with imperfect recall in which counterfactual regret minimization is guaranteed to converge to the optimal strategy. We then show how to adapt the recent variant of the algorithm, CFR, directly to NFGSS and present experimental validation on two distinct domains modelling search games and ticket inspection. We show that CFR is applicable and efficient in domains with imperfect recall that are substantially different from poker. 
Moreover, if we are willing to sacrifice a negligible degree of approximation, CFR can find a solution substantially faster than methods traditionally used in research on security games, such as formulating the game as a linear program (LP) and incrementally building the game model by double oracle methods.",
"title": ""
},
{
"docid": "800dc3e6a3f58d2af1ed7cd526074d54",
"text": "The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce the network to be sparse, and at the same time remove any redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on (1, 2)-norm, which promotes competition for features between different weights, thus enforcing them to fit to disjoint sets of features. We further combine the exclusive sparsity with the group sparsity based on (2, 1)-norm, to promote both sharing and competition for features in training of a deep neural network. We validate our method on multiple public datasets, and the results show that our method can obtain more compact and efficient networks while also improving the performance over the base networks with full weights, as opposed to existing sparsity regularizations that often obtain efficiency at the expense of prediction accuracy.",
"title": ""
},
{
"docid": "b61042f2d5797e57e2bc395966bb7ad2",
"text": "A number of classifier fusion methods have been recently developed opening an alternative approach leading to a potential improvement in the classification performance. As there is little theory of information fusion itself, currently we are faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in future. A taxonomy of fusion methods trying to bring some order into the existing “pudding of diversities” is also provided.",
"title": ""
},
{
"docid": "83ba1d7915fc7cb73c86172970b1979e",
"text": "This paper presents a new modeling methodology accounting for generation and propagation of minority carriers that can be used directly in circuit-level simulators in order to estimate coupled parasitic currents. The method is based on a new compact model of basic components (p-n junction and resistance) and takes into account minority carriers at the boundary. An equivalent circuit schematic of the substrate is built by identifying these basic elements in the substrate and interconnecting them. Parasitic effects such as bipolar or latch-up effects result from the continuity of minority carriers guaranteed by the components' models. A structure similar to a half-bridge perturbing sensitive n-wells has been simulated. It is composed by four p-n junctions connected together by their common p-doped sides. The results are in good agreement with those obtained from physical device simulations.",
"title": ""
},
{
"docid": "7543281174d7dc63e180249d94ad6c07",
"text": "Enriching speech recognition output with sentence boundaries improves its human readability and enables further processing by downstream language processing modules. We have constructed a hidden Markov model (HMM) system to detect sentence boundaries that uses both prosodic and textual information. Since there are more nonsentence boundaries than sentence boundaries in the data, the prosody model, which is implemented as a decision tree classifier, must be constructed to effectively learn from the imbalanced data distribution. To address this problem, we investigate a variety of sampling approaches and a bagging scheme. A pilot study was carried out to select methods to apply to the full NIST sentence boundary evaluation task across two corpora (conversational telephone speech and broadcast news speech), using both human transcriptions and recognition output. In the pilot study, when classification error rate is the performance measure, using the original training set achieves the best performance among the sampling methods, and an ensemble of multiple classifiers from different downsampled training sets achieves slightly poorer performance, but has the potential to reduce computational effort. However, when performance is measured using receiver operating characteristics (ROC) or area under the curve (AUC), then the sampling approaches outperform the original training set. This observation is important if the 0885-2308/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.csl.2005.06.002 * Corresponding author. Tel.: +1 510 666 2993; fax: +510 666 2956. E-mail addresses: yangl@icsi.berkeley.edu (Y. Liu), nchawla@cse.nd.edu (N.V. Chawla), harper@ecn.purdue.edu (M.P. Harper), ees@speech.sri.com (E. Shriberg), stolcke@speech.sri.com (A. Stolcke). Y. Liu et al. / Computer Speech and Language 20 (2006) 468–494 469 sentence boundary detection output is used by downstream language processing modules. Bagging was found to significantly improve system performance for each of the sampling methods. The gain from these methods may be diminished when the prosody model is combined with the language model, which is a strong knowledge source for the sentence detection task. The patterns found in the pilot study were replicated in the full NIST evaluation task. The conclusions may be dependent on the task, the classifiers, and the knowledge combination approach. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e58e294dbacf605e40ff2f59cc4f8a6a",
"text": "There are fundamental similarities between sleep in mammals and quiescence in the arthropod Drosophila melanogaster, suggesting that sleep-like states are evolutionarily ancient. The nematode Caenorhabditis elegans also has a quiescent behavioural state during a period called lethargus, which occurs before each of the four moults. Like sleep, lethargus maintains a constant temporal relationship with the expression of the C. elegans Period homologue LIN-42 (ref. 5). Here we show that quiescence associated with lethargus has the additional sleep-like properties of reversibility, reduced responsiveness and homeostasis. We identify the cGMP-dependent protein kinase (PKG) gene egl-4 as a regulator of sleep-like behaviour, and show that egl-4 functions in sensory neurons to promote the C. elegans sleep-like state. Conserved effects on sleep-like behaviour of homologous genes in C. elegans and Drosophila suggest a common genetic regulation of sleep-like states in arthropods and nematodes. Our results indicate that C. elegans is a suitable model system for the study of sleep regulation. The association of this C. elegans sleep-like state with developmental changes that occur with larval moults suggests that sleep may have evolved to allow for developmental changes.",
"title": ""
},
{
"docid": "01b05ea8fcca216e64905da7b5508dea",
"text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.",
"title": ""
},
{
"docid": "852ff3b52b4bf8509025cb5cb751899f",
"text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial evaluate how accurate people are when performing the task of identify doctored images. In this paper, we performed an extensive user study evaluating subjects capacity to detect fake images. After observing an image, users have been asked if it had been altered or not. If the user answered the image has been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previously studies, our method propose different ways to avoid lucky guess when evaluating users answers. Our results indicate that people show inaccurate skills at differentiating between altered and non-altered images, with an accuracy of 58%, and only identifying the modified images 46.5% of the time. We also track user features such as age, answering time, confidence, providing deep analysis of how such variables influence on the users’ performance.",
"title": ""
},
{
"docid": "c3be24db41e57658793281a9765635c0",
"text": "A boundary element method (BEM) simulation is used to compare the efficiency of numerical inverse Laplace transform strategies, considering general requirements of Laplace-space numerical approaches. The two-dimensional BEM solution is used to solve the Laplace-transformed diffusion equation, producing a time-domain solution after a numerical Laplace transform inversion. Motivated by the needs of numerical methods posed in Laplace-transformed space, we compare five inverse Laplace transform algorithms and discuss implementation techniques to minimize the number of Laplace-space function evaluations. We investigate the ability to calculate a sequence of time domain values using the fewest Laplace-space model evaluations. We find Fourier-series based inversion algorithms work for common time behaviors, are the most robust with respect to free parameters, and allow for straightforward image function evaluation re-use across at least a log cycle of time.",
"title": ""
},
{
"docid": "5594475c91355d113e0045043eff8b93",
"text": "Background: Since the introduction of the systematic review process to Software Engineering in 2004, researchers have investigated a number of ways to mitigate the amount of effort and time taken to filter through large volumes of literature.\n Aim: This study aims to provide a critical analysis of text mining techniques used to support the citation screening stage of the systematic review process.\n Method: We critically re-reviewed papers included in a previous systematic review which addressed the use of text mining methods to support the screening of papers for inclusion in a review. The previous review did not provide a detailed analysis of the text mining methods used. We focus on the availability in the papers of information about the text mining methods employed, including the description and explanation of the methods, parameter settings, assessment of the appropriateness of their application given the size and dimensionality of the data used, performance on training, testing and validation data sets, and further information that may support the reproducibility of the included studies.\n Results: Support Vector Machines (SVM), Naïve Bayes (NB) and Committee of classifiers (Ensemble) are the most used classification algorithms. In all of the studies, features were represented with Bag-of-Words (BOW) using both binary features (28%) and term frequency (66%). Five studies experimented with n-grams with n between 2 and 4, but mostly the unigram was used. χ2, information gain and tf-idf were the most commonly used feature selection techniques. Feature extraction was rarely used although LDA and topic modelling were used. Recall, precision, F and AUC were the most used metrics and cross validation was also well used. More than half of the studies used a corpus size of below 1,000 documents for their experiments while corpus size for around 80% of the studies was 3,000 or fewer documents. The major common ground we found for comparing performance assessment based on independent replication of studies was the use of the same dataset but a sound performance comparison could not be established because the studies had little else in common. In most of the studies, insufficient information was reported to enable independent replication. The studies analysed generally did not include any discussion of the statistical appropriateness of the text mining method that they applied. In the case of applications of SVM, none of the studies report the number of support vectors that they found to indicate the complexity of the prediction engine that they use, making it impossible to judge the extent to which over-fitting might account for the good performance results.\n Conclusions: There is yet to be concrete evidence about the effectiveness of text mining algorithms regarding their use in the automation of citation screening in systematic reviews. The studies indicate that options are still being explored, but there is a need for better reporting as well as more explicit process details and access to datasets to facilitate study replication for evidence strengthening. In general, the reader often gets the impression that text mining algorithms were applied as magic tools in the reviewed papers, relying on default settings or default optimization of available machine learning toolboxes without an in-depth understanding of the statistical validity and appropriateness of such tools for text mining purposes.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
{
"docid": "08025e6ed1ee71596bdc087bfd646eac",
"text": "A method is presented for computing an orthonormal set of eigenvectors for the discrete Fourier transform (DFT). The technique is based on a detailed analysis of the eigenstructure of a special matrix which commutes with the DFT. It is also shown how fractional powers of the DFT can be efficiently computed, and possible applications to multiplexing and transform coding are suggested. T",
"title": ""
},
{
"docid": "3cfa45816c57cbbe1d86f7cce7f52967",
"text": "Video games have become one of the favorite activities of American children. A growing body of research is linking violent video game play to aggressive cognitions, attitudes, and behaviors. The first goal of this study was to document the video games habits of adolescents and the level of parental monitoring of adolescent video game use. The second goal was to examine associations among violent video game exposure, hostility, arguments with teachers, school grades, and physical fights. In addition, path analyses were conducted to test mediational pathways from video game habits to outcomes. Six hundred and seven 8th- and 9th-grade students from four schools participated. Adolescents who expose themselves to greater amounts of video game violence were more hostile, reported getting into arguments with teachers more frequently, were more likely to be involved in physical fights, and performed more poorly in school. Mediational pathways were found such that hostility mediated the relationship between violent video game exposure and outcomes. Results are interpreted within and support the framework of the General Aggression Model.",
"title": ""
},
{
"docid": "2c8bfb9be08edfdac6d335bdcffe204c",
"text": "Undoubtedly, the age of big data has opened new options for natural disaster management, primarily because of the varied possibilities it provides in visualizing, analyzing, and predicting natural disasters. From this perspective, big data has radically changed the ways through which human societies adopt natural disaster management strategies to reduce human suffering and economic losses. In a world that is now heavily dependent on information technology, the prime objective of computer experts and policy makers is to make the best of big data by sourcing information from varied formats and storing it in ways that it can be effectively used during different stages of natural disaster management. This paper aimed at making a systematic review of the literature in analyzing the role of big data in natural disaster management and highlighting the present status of the technology in providing meaningful and effective solutions in natural disaster management. The paper has presented the findings of several researchers on varied scientific and technological perspectives that have a bearing on the efficacy of big data in facilitating natural disaster management. In this context, this paper reviews the major big data sources, the associated achievements in different disaster management phases, and emerging technological topics associated with leveraging this new ecosystem of Big Data to monitor and detect natural hazards, mitigate their effects, assist in relief efforts, and contribute to the recovery and reconstruction processes.",
"title": ""
}
] | scidocsrr |
2fa853ae293bf05da80dc239e01616d1 | A Hybrid Generative/Discriminative Approach to Semi-Supervised Classifier Design | [
{
"docid": "3ac2f2916614a4e8f6afa1c31d9f704d",
"text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.",
"title": ""
},
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
},
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
}
] | [
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "0de75995e7face03c56ce90aae7bf944",
"text": "The analysis of facial appearance is significant to an early diagnosis of medical genetic diseases. The fast development of image processing and machine learning techniques facilitates the detection of facial dysmorphic features. This paper is a survey of the recent studies developed for the screening of genetic abnormalities across the facial features obtained from two dimensional and three dimensional images.",
"title": ""
},
{
"docid": "ef9947c8f478d6274fcbcf8c9e300806",
"text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.",
"title": ""
},
{
"docid": "f65d5366115da23c8acd5bce1f4a9887",
"text": "Effective crisis management has long relied on both the formal and informal response communities. Social media platforms such as Twitter increase the participation of the informal response community in crisis response. Yet, challenges remain in realizing the formal and informal response communities as a cooperative work system. We demonstrate a supportive technology that recognizes the existing capabilities of the informal response community to identify needs (seeker behavior) and provide resources (supplier behavior), using their own terminology. To facilitate awareness and the articulation of work in the formal response community, we present a technology that can bridge the differences in terminology and understanding of the task between the formal and informal response communities. This technology includes our previous work using domain-independent features of conversation to identify indications of coordination within the informal response community. In addition, it includes a domain-dependent analysis of message content (drawing from the ontology of the formal response community and patterns of language usage concerning the transfer of property) to annotate social media messages. The resulting repository of annotated messages is accessible through our social media analysis tool, Twitris. It allows recipients in the formal response community to sort on resource needs and availability along various dimensions including geography and time. Thus, computation indexes the original social media content and enables complex querying to identify contents, players, and locations. Evaluation of the computed annotations for seeker-supplier behavior with human judgment shows fair to moderate agreement. In addition to the potential benefits to the formal emergency response community regarding awareness of the observations and activities of the informal response community, the analysis serves as a point of reference for evaluating more computationally intensive efforts and characterizing the patterns of language behavior during a crisis.",
"title": ""
},
{
"docid": "b1f0b80c51af4c146495eb2b1e3b9ba9",
"text": "This paper presents an average current mode buck dimmable light-emitting diode (LED) driver for large-scale single-string LED backlighting applications. The proposed integrated current control technique can provide exact current control signals by using an autozeroed integrator to enhance the accuracy of the average current of LEDs while driving a large number of LEDs. Adoption of discontinuous low-side current sensing leads to power loss reduction. Adoption of a fast-settling technique allows the LED driver to enter into the steady state within three switching cycles after the dimming signal is triggered. Implemented in a 0.35-μm HV CMOS process, the proposed LED driver achieves 1.7% LED current error and 98.16% peak efficiency over an input voltage range of 110 to 200 V while driving 30 to 50 LEDs.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "4d791fa53f7ed8660df26cd4dbe9063a",
"text": "The Internet is a powerful political instrument, wh ich is increasingly employed by terrorists to forward their goals. The fiv most prominent contemporary terrorist uses of the Net are information provision , fi ancing, networking, recruitment, and information gathering. This article describes a nd explains each of these uses and follows up with examples. The final section of the paper describes the responses of government, law enforcement, intelligence agencies, and others to the terrorism-Internet nexus. There is a particular emphasis within the te xt on the UK experience, although examples from other jurisdictions are also employed . ___________________________________________________________________ “Terrorists use the Internet just like everybody el se” Richard Clarke (2004) 1 ___________________________________________________________________",
"title": ""
},
{
"docid": "00ea9078f610b14ed0ed00ed6d0455a7",
"text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.",
"title": ""
},
{
"docid": "e8d2fc861fd1b930e65d40f6ce763672",
"text": "Despite that burnout presents a serious burden for modern society, there are no diagnostic criteria. Additional difficulty is the differential diagnosis with depression. Consequently, there is a need to dispose of a burnout biomarker. Epigenetic studies suggest that DNA methylation is a possible mediator linking individual response to stress and psychopathology and could be considered as a potential biomarker of stress-related mental disorders. Thus, the aim of this review is to provide an overview of DNA methylation mechanisms in stress, burnout and depression. In addition to state-of-the-art overview, the goal of this review is to provide a scientific base for burnout biomarker research. We performed a systematic literature search and identified 25 pertinent articles. Among these, 15 focused on depression, 7 on chronic stress and only 3 on work stress/burnout. Three epigenome-wide studies were identified and the majority of studies used the candidate-gene approach, assessing 12 different genes. The glucocorticoid receptor gene (NR3C1) displayed different methylation patterns in chronic stress and depression. The serotonin transporter gene (SLC6A4) methylation was similarly affected in stress, depression and burnout. Work-related stress and depressive symptoms were associated with different methylation patterns of the brain derived neurotrophic factor gene (BDNF) in the same human sample. The tyrosine hydroxylase (TH) methylation was correlated with work stress in a single study. Additional, thoroughly designed longitudinal studies are necessary for revealing the cause-effect relationship of work stress, epigenetics and burnout, including its overlap with depression.",
"title": ""
},
{
"docid": "7c295cb178e58298b1f60f5a829118fd",
"text": "A dual-band 0.92/2.45 GHz circularly-polarized (CP) unidirectional antenna using the wideband dual-feed network, two orthogonally positioned asymmetric H-shape slots, and two stacked concentric annular-ring patches is proposed for RF identification (RFID) applications. The measurement result shows that the antenna achieves the impedance bandwidths of 15.4% and 41.9%, the 3-dB axial-ratio (AR) bandwidths of 4.3% and 21.5%, and peak gains of 7.2 dBic and 8.2 dBic at 0.92 and 2.45 GHz bands, respectively. Moreover, the antenna provides stable symmetrical radiation patterns and wide-angle 3-dB AR beamwidths in both lower and higher bands for unidirectional wide-coverage RFID reader applications. Above all, the dual-band CP unidirectional patch antenna presented is beneficial to dual-band RFID system on configuration, implementation, as well as cost reduction.",
"title": ""
},
{
"docid": "627f3b4ae9df80bdc0374d4fe375f40e",
"text": "Though in the lowest level cuckoos exploit precisely this hypothesis that do. This by people who are relevantly similar others suggest the human moral. 1983 levine et al but evolutionary mechanism. No need for a quite generously supported by an idea that do. Individuals may be regulated by the, future nonetheless. It is that we can do not merely apparent case if there are less. 2005 oliner sorokin taylor et al. Oliner however a poet and wrong boehm tackles the motives. Studies have the willingness to others from probability of narrative.",
"title": ""
},
{
"docid": "a45c93e89cc3df3ebec59eb0c81192ec",
"text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.",
"title": ""
},
{
"docid": "e1050f3c38f0b49893da4dd7722aff71",
"text": "The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 Ib) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described",
"title": ""
},
{
"docid": "dac17254c16068a4dcf49e114bfcc822",
"text": "We present a novel coded exposure video technique for multi-image motion deblurring. The key idea of this paper is to capture video frames with a set of complementary fluttering patterns, which enables us to preserve all spectrum bands of a latent image and recover a sharp latent image. To achieve this, we introduce an algorithm for generating a complementary set of binary sequences based on the modern communication theory and implement the coded exposure video system with an off-the-shelf machine vision camera. To demonstrate the effectiveness of our method, we provide in-depth analyses of the theoretical bounds and the spectral gains of our method and other state-of-the-art computational imaging approaches. We further show deblurring results on various challenging examples with quantitative and qualitative comparisons to other computational image capturing methods used for image deblurring, and show how our method can be applied for protecting privacy in videos.",
"title": ""
},
{
"docid": "af910640384bca46ba4268fe4ba0c3b3",
"text": "The experience and methodology developed by COPEL for the integrated use of Pls-Cadd (structure spotting) and Tower (structural analysis) softwares are presented. Structural evaluations in transmission line design are possible for any loading condition, allowing considerations of new or updated loading trees, wind speeds or design criteria.",
"title": ""
},
{
"docid": "79fd1db13ce875945c7e11247eb139c8",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "6d4315ed2e36708528e46b368c89573e",
"text": "Annotating the right data for training deep neural networks is an important challenge. Active learning using uncertainty estimates from Bayesian Neural Networks (BNNs) could provide an effective solution to this. Despite being theoretically principled, BNNs require approximations to be applied to large-scale problems, and have not been used widely by practitioners. In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep BNN. We conduct a series of active learning experiments to evaluate DPEs on classification with the CIFAR-10, CIFAR-100 and ImageNet datasets, and semantic segmentation with the BDD100k dataset. Our models consistently outperform baselines and previously published methods, requiring significantly less training data to achieve competitive performances.",
"title": ""
},
{
"docid": "52945fb1d436b81a3e52d83abdea55d0",
"text": "Article history: Received 16 September 2016 Received in revised form 28 November 2016 Accepted 20 January 2017 Available online xxxx",
"title": ""
}
] | scidocsrr |
97a95b08d96e23560c189eb9e2696920 | Missing Modality Transfer Learning via Latent Low-Rank Constraint | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "50c3e7855f8a654571a62a094a86c4eb",
"text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"title": ""
}
] | [
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e50b074abe37cc8caec8e3922347e0d9",
"text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.",
"title": ""
},
{
"docid": "fd455a51a5f96251b31db5e6eae34ecc",
"text": "As an infrastructural and productive industry, tourism is very important in modern economy and includes different scopes and functions. If it is developed appropriately, cultural relations and economic development of countries will be extended and provided. Web development as an applied tool in the internet plays a very determining role in tourism success and proper exploitation of it can pave the way for more development and success of this industry. On the other hand, the amount of data in the current world has been increased and analysis of large sets of data that is referred to as big data has been converted into a strategic approach to enhance competition and establish new methods for development, growth, innovation, and enhancement of the number of customers. Today, big data is one of the important issues of information management in digital age and one of the main opportunities in tourism industry for optimal exploitation of maximum information. Big data can shape experiences of smart travel. Remarkable growth of these data sources has inspired new Strategies to understand the socio-economic phenomenon in different fields. The analytical approach of big data emphasizes the capacity of data collection and analysis with an unprecedented extent, depth and scale for solving the problems of real life and uses it. Indeed, big data analyses open the doors to various opportunities for developing the modern knowledge or changing our understanding of this scope and support decision-making in tourism industry. The purpose of this study is to show helpfulness of big data analysis to discover behavioral patterns in tourism industry and propose a model for employing data in tourism.",
"title": ""
},
{
"docid": "93df3ce5213252f8ae7dbd396ebb71bd",
"text": "Role-Based Access Control (RBAC) has been the dominant access control model in industry since the 1990s. It is widely implemented in many applications, including major cloud platforms such as OpenStack, AWS, and Microsoft Azure. However, due to limitations of RBAC, there is a shift towards Attribute-Based Access Control (ABAC) models to enhance flexibility by using attributes beyond roles and groups. In practice, this shift has to be gradual since it is unrealistic for existing systems to abruptly adopt ABAC models, completely eliminating current RBAC implementations.In this paper, we propose an ABAC extension with user attributes for the OpenStack Access Control (OSAC) model and demonstrate its enforcement utilizing the Policy Machine (PM) developed by the National Institute of Standards and Technology. We utilize some of the PM's components along with a proof-of-concept implementation to enforce this ABAC extension for OpenStack, while keeping OpenStack's current RBAC architecture in place. This provides the benefits of enhancing access control flexibility with support of user attributes, while minimizing the overhead of altering the existing OpenStack access control framework. We present use cases to depict added benefits of our model and show enforcement results. We then evaluate the performance of our proposed ABAC extension, and discuss its applicability and possible performance enhancements.",
"title": ""
},
{
"docid": "1430e6cb8a758d97335af0fc337e0c08",
"text": "Low-cost Radio Frequency Identification (RFID) tags affixed to consumer items as smart labels are emerging as one of the most pervasive computing technologies in history. This presents a number of advantages, but also opens a huge number of security problems that need to be addressed before its successful deployment. Many proposals have recently appeared, but all of them are based on RFID tags using classical cryptographic primitives such as Pseudorandom Number Generators (PRNGs), hash functions, or block ciphers. We believe this assumption to be fairly unrealistic, as classical cryptographic constructions lie well beyond the computational reach of very low-cost RFID tags. A new approach is necessary to tackle the problem, so we propose a minimalist lightweight mutual authentication protocol for low-cost RFID tags that offers an adequate security level for certain applications, which could be implemented even in the most limited low-cost tags as it only needs around 300 gates.",
"title": ""
},
{
"docid": "aed009b4d5cbf184f9eb321c9d2d7e5f",
"text": "A novel and simple half ring monopole antenna is presented here. The proposed antenna has been fed by a microstrip line to provide bandwidth supporting ultra wideband (UWB) characteristics. While decreasing the physical size of the antenna, the parameters that affect the performance of the antenna have been investigated here.",
"title": ""
},
{
"docid": "cb2309b5290572cf7211f69cac7b99e8",
"text": "Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking",
"title": ""
},
{
"docid": "493c45304bd5b7dd1142ace56e94e421",
"text": "While closed timelike curves (CTCs) are not known to exist, studying their consequences has led to nontrivial insights in general relativity, quantum information, and other areas. In this paper we show that if CTCs existed, then quantum computers would be no more powerful than classical computers: both would have the (extremely large) power of the complexity class PSPACE, consisting of all problems solvable by a conventional computer using a polynomial amount of memory. This solves an open problem proposed by one of us in 2005, and gives an essentially complete understanding of computational complexity in the presence of CTCs. Following the work of Deutsch, we treat a CTC as simply a region of spacetime where a “causal consistency” condition is imposed, meaning that Nature has to produce a (probabilistic or quantum) fixed-point of some evolution operator. Our conclusion is then a consequence of the following theorem: given any quantum circuit (not necessarily unitary), a fixed-point of the circuit can be (implicitly) computed in polynomial space. This theorem might have independent applications in quantum information.",
"title": ""
},
{
"docid": "e75dccbe66ee79c7e1dee67e3df4dc12",
"text": "In recent years, many publications showed that convolutional neural network based features can have a superior performance to engineered features. However, not much effort was taken so far to extract local features efficiently for a whole image. In this paper, we present an approach to compute patch-based local feature descriptors efficiently in presence of pooling and striding layers for whole images at once. Our approach is generic and can be applied to nearly all existing network architectures. This includes networks for all local feature extraction tasks like camera calibration, Patchmatching, optical flow estimation and stereo matching. In addition, our approach can be applied to other patchbased approaches like sliding window object detection and recognition. We complete our paper with a speed benchmark of popular CNN based feature extraction approaches applied on a whole image, with and without our speedup, and example code (for Torch) that shows how an arbitrary CNN architecture can be easily converted by our approach.",
"title": ""
},
{
"docid": "74d2d780291e9dbf2e725b55ccadd278",
"text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.",
"title": ""
},
{
"docid": "53ada9fce2d0af2208c4c312870a2912",
"text": "This paper describes a CMOS capacitive sensing amplifier for a monolithic MEMS accelerometer fabricated by post-CMOS surface micromachining. This chopper stabilized amplifier employs capacitance matching with optimal transistor sizing to minimize sensor noise floor. Offsets due to sensor and circuit are reduced by ac offset calibration and dc offset cancellation based on a differential difference amplifier (DDA). Low-duty-cycle periodic reset is used to establish robust dc bias at the sensing electrodes with low noise. This work shows that continuous-time voltage sensing can achieve lower noise than switched-capacitor charge integration for sensing ultra-small capacitance changes. A prototype accelerometer integrated with this circuit achieves 50g Hz acceleration noise floor and 0.02-aF Hz capacitance noise floor while chopped at 1 MHz.",
"title": ""
},
{
"docid": "a4d3cebea4be0bbb7890c033e7f252c1",
"text": "In this paper, we investigate continuum manipulators that are analogous to conventional rigid-link parallel robot designs. These “parallel continuum manipulators” have the potential to inherit some of the compactness and compliance of continuum robots while retaining some of the precision, stability, and strength of rigid-link parallel robots, yet they represent a relatively unexplored area of the broad manipulator design space. We describe the construction of a prototype manipulator structure with six compliant legs connected in a parallel pattern similar to that of a Stewart-Gough platform. We formulate the static forward and inverse kinematics problems for such manipulators as the solution to multiple Cosserat-rod models with coupled boundary conditions, and we test the accuracy of this approach in a set of experiments, including the prediction of leg buckling. An inverse kinematics simulation of slices through the 6 degree-of-freedom (DOF) workspace illustrates the kinematic mapping, range of motion, and force required for actuation, which sheds light on the potential advantages and tradeoffs that parallel continuum manipulators may bring. Potential applications include miniature wrists and arms for endoscopic medical procedures, and lightweight compliant arms for safe interaction with humans.",
"title": ""
},
{
"docid": "5bb15e64e7e32f3a0b1b99be8b8ab2bf",
"text": "Breast cancer is one of the major causes of death in women when compared to all other cancers. Breast cancer has become the most hazardous types of cancer among women in the world. Early detection of breast cancer is essential in reducing life losses. This paper presents a comparison among the different Data mining classifiers on the database of breast cancer Wisconsin Breast Cancer (WBC), by using classification accuracy. This paper aims to establish an accurate classification model for Breast cancer prediction, in order to make full use of the invaluable information in clinical data, especially which is usually ignored by most of the existing methods when they aim for high prediction accuracies. We have done experiments on WBC data. The dataset is divided into training set with 499 and test set with 200 patients. In this experiment, we compare six classification techniques in Weka software and comparison results show that Support Vector Machine (SVM) has higher prediction accuracy than those methods. Different methods for breast cancer detection are explored and their accuracies are compared. With these results, we infer that the SVM are more suitable in handling the classification problem of breast cancer prediction, and we recommend the use of these approaches in similar classification problems. Keywords—breast cancer; classification; Decision tree, Naïve Bayes, MLP, Logistic Regression SVM, KNN and weka;",
"title": ""
},
{
"docid": "b5b5e87aa833cdabd52f9072296c49f8",
"text": "In the modern e-commerce, the behaviors of customers contain rich information, e.g., consumption habits, the dynamics of preferences. Recently, session-based recommendationsare becoming popular to explore the temporal characteristics of customers' interactive behaviors. However, existing works mainly exploit the short-term behaviors without fully taking the customers' long-term stable preferences and evolutions into account. In this paper, we propose a novel Behavior-Intensive Neural Network (BINN) for next-item recommendation by incorporating both users' historical stable preferences and present consumption motivations. Specifically, BINN contains two main components, i.e., Neural Item Embedding, and Discriminative Behaviors Learning. Firstly, a novel item embedding method based on user interactions is developed for obtaining an unified representation for each item. Then, with the embedded items and the interactive behaviors over item sequences, BINN discriminatively learns the historical preferences and present motivations of the target users. Thus, BINN could better perform recommendations of the next items for the target users. Finally, for evaluating the performances of BINN, we conduct extensive experiments on two real-world datasets, i.e., Tianchi and JD. The experimental results clearly demonstrate the effectiveness of BINN compared with several state-of-the-art methods.",
"title": ""
},
{
"docid": "c0e99b3b346ef219e8898c3608d2664f",
"text": "A depth image-based rendering (DIBR) technique is one of the rendering processes of virtual views with a color image and the corresponding depth map. The most important issue of DIBR is that the virtual view has no information at newly exposed areas, so called disocclusion. The general solution is to smooth the depth map using a Gaussian smoothing filter before 3D warping. However, the filtered depth map causes geometric distortion and the depth quality is seriously degraded. Therefore, we propose a new depth map filtering algorithm to solve the disocclusion problem while maintaining the depth quality. In order to preserve the visual quality of the virtual view, we smooth the depth map with further reduced deformation. After extracting object boundaries depending on the position of the virtual view, we apply a discontinuity-adaptive smoothing filter according to the distance of the object boundary and the amount of depth discontinuities. Finally, we obtain the depth map with higher quality compared to other methods. Experimental results showed that the disocclusion is efficiently removed and the visual quality of the virtual view is maintained.",
"title": ""
},
{
"docid": "adb42f43e57458888344dc97bbae9439",
"text": "We present a general picture of the parallel meta-heuristic search for optimization. We recall the main concepts and strategies in designing parallel metaheuristics, pointing to a number of contributions that instantiated them for neighborhoodand populationbased meta-heuristics, and identify trends and promising research directions. We focus on cooperation-based strategies, which display remarkable performances, in particular on asynchronous cooperation and advanced cooperation strategies that create new information out of exchanged data to enhance the global guidance of the search.",
"title": ""
},
{
"docid": "3cd19e73aade3e99fff4b213afd3c678",
"text": "We describe the dialogue model for the virtual humans developed at the Institute for Creative Technologies at the University of Southern California. The dialogue model contains a rich set of information state and dialogue moves to allow a wide range of behaviour in multimodal, multiparty interaction. We extend this model to enable non-team negotiation, using ideas from social science literature on negotiation and implemented strategies and dialogue moves for this area. We present a virtual human doctor who uses this model to engage in multimodal negotiation dialogue with people from other organisations. The doctor is part of the SASO-ST system, used for training for non-team interactions.",
"title": ""
},
{
"docid": "9b2dc34302b69ca863e4bcca26e09c96",
"text": "Two opposing theories have been proposed to explain competitive advantage of firms. First, the market-based view (MBV) is focused on product or market positions and competition while second, the resource-based view (RBV) aims at explaining success by inwardly looking at unique resources and capabilities of a firm. Research has been struggling to distinguish impacts of these theories for illuminating performance. Business models are seen as an important concept to systemize the business and value creation logic of firms by defining different core components. Thus, this paper tries to assess associations between these components and MBV or RBV perspectives by applying content analysis. Two of the business model components were found to have strong links with the MBV while three of them showed indications of their roots lying in the resource-based perspective. These results are discussed and theorized in a final step by suggesting frameworks of the corresponding perspectives for further explaining competitive advantage.",
"title": ""
},
{
"docid": "f74ccd06a302b70980d7b3ba2ee76cfb",
"text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.",
"title": ""
}
] | scidocsrr |
d3f47a20ef4feb70db93bfb0c9ca577b | Permacoin: Repurposing Bitcoin Work for Data Preservation | [
{
"docid": "dfb3a6fea5c2b12e7865f8b6664246fb",
"text": "We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, *An earlier version of this article was entitled \"Cumulative Prospect Theory: An Analysis of Decision under Uncertainty.\" This article has benefited from discussions with Colin Camerer, Chew Soo-Hong, David Freedman, and David H. Krantz. We are especially grateful to Peter P. Wakker for his invaluable input and contribution to the axiomatic analysis. We are indebted to Richard Gonzalez and Amy Hayes for running the experiment and analyzing the data. This work was supported by Grants 89-0064 and 88-0206 from the Air Force Office of Scientific Research, by Grant SES-9109535 from the National Science Foundation, and by the Sloan Foundation. 298 AMOS TVERSKY/DANIEL KAHNEMAN and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. 
To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of .99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. Additional phenomena that lie beyond the scope of the theory--and of its alternatives--are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix.",
"title": ""
}
] | [
{
"docid": "b428ee2a14b91fee7bb80058e782774d",
"text": "Recurrent connectionist networks are important because they can perform temporally extended tasks, giving them considerable power beyond the static mappings performed by the now-familiar multilayer feedforward networks. This ability to perform highly nonlinear dynamic mappings makes these networks particularly interesting to study and potentially quite useful in tasks which have an important temporal component not easily handled through the use of simple tapped delay lines. Some examples are tasks involving recognition or generation of sequential patterns and sensorimotor control. This report examines a number of learning procedures for adjusting the weights in recurrent networks in order to train such networks to produce desired temporal behaviors from input-output stream examples. The procedures are all based on the computation of the gradient of performance error with respect to network weights, and a number of strategies for computing the necessary gradient information are described. Included here are approaches which are familiar and have been rst described elsewhere, along with several novel approaches. One particular purpose of this report is to provide uniform and detailed descriptions and derivations of the various techniques in order to emphasize how they relate to one another. Another important contribution of this report is a detailed analysis of the computational requirements of the various approaches discussed.",
"title": ""
},
{
"docid": "19d554b2ef08382418979bf7ceb15baf",
"text": "In this paper, we address the cross-lingual topic modeling, which is an important technique that enables global enterprises to detect and compare topic trends across global markets. Previous works in cross-lingual topic modeling have proposed methods that utilize parallel or comparable corpus in constructing the polylingual topic model. However, parallel or comparable corpus in many cases are not available. In this research, we incorporate techniques of mapping cross-lingual word space and the topic modeling (LDA) and propose two methods: Translated Corpus with LDA (TC-LDA) and Post Match LDA (PM-LDA). The cross-lingual word space mapping allows us to compare words of different languages, and LDA enables us to group words into topics. Both TC-LDA and PM-LDA do not need parallel or comparable corpus and hence have more applicable domains. The effectiveness of both methods is evaluated using UM-Corpus and WS-353. Our evaluation results indicate that both methods are able to identify similar documents written in different language. In addition, PM-LDA is shown to achieve better performance than TC-LDA, especially when document length is short.",
"title": ""
},
{
"docid": "fc289c7a9f08ff3f5dd41ae683ab77b3",
"text": "Approximate Newton methods are standard optimization tools which aim to maintain the benefits of Newton’s method, such as a fast rate of convergence, while alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the total expected reward, which is a standard objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods drop certain terms in the Hessian. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EMalgorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.",
"title": ""
},
{
"docid": "846931a1e4c594626da26931110c02d6",
"text": "A large volume of research has been conducted in the cognitive radio (CR) area the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence, neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.",
"title": ""
},
{
"docid": "ede8a7a2ba75200dce83e17609ec4b5b",
"text": "We present a complimentary objective for training recurrent neural networks (RNN) with gating units that helps with regularization and interpretability of the trained model. Attention-based RNN models have shown success in many difficult sequence to sequence classification problems with long and short term dependencies, however these models are prone to overfitting. In this paper, we describe how to regularize these models through an L1 penalty on the activation of the gating units, and show that this technique reduces overfitting on a variety of tasks while also providing to us a human-interpretable visualization of the inputs used by the network. These tasks include sentiment analysis, paraphrase recognition, and question answering.",
"title": ""
},
{
"docid": "d401630481d725ae3d853b126710da31",
"text": "Combinatory Category Grammar (CCG) supertagging is a task to assign lexical categories to each word in a sentence. Almost all previous methods use fixed context window sizes to encode input tokens. However, it is obvious that different tags usually rely on different context window sizes. This motivates us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism on the local contexts. We find that applying dropout on the dynamic filters is superior to the regular dropout on word embeddings. We use this approach to demonstrate the state-ofthe-art CCG supertagging performance on the standard test set. Introduction Combinatory Category Grammar (CCG) provides a connection between syntax and semantics of natural language. The syntax can be specified by derivations of the lexicon based on the combinatory rules, and the semantics can be recovered from a set of predicate-argument relations. CCG provides an elegant solution for a wide range of semantic analysis, such as semantic parsing (Zettlemoyer and Collins 2007; Kwiatkowski et al. 2010; 2011; Artzi, Lee, and Zettlemoyer 2015), semantic representations (Bos et al. 2004; Bos 2005; 2008; Lewis and Steedman 2013), and semantic compositions, all of which heavily depend on the supertagging and parsing performance. All these motivate us to build a more accurate CCG supertagger. CCG supertagging is the task to predict the lexical categories for each word in a sentence. Existing algorithms on CCG supertagging range from point estimation (Clark and Curran 2007; Lewis and Steedman 2014) to sequential estimation (Xu, Auli, and Clark 2015; Lewis, Lee, and Zettlemoyer 2016; Vaswani et al. 2016), which predict the most probable supertag of the current word according to the context in a fixed size window. This fixed size window assumption is too strong to generalize. We argue this from two perspectives. One perspective comes from the inputs. For a particular word, the number of its categories may vary from 1 to 130 in CCGBank 02-21 (Hockenmaier and Steedman 2007). We ∗Corresponding author. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. on a warm autumn day ...",
"title": ""
},
{
"docid": "662781648f5c9bbcb67dfd2b529a1347",
"text": "A compact broadband class-E power amplifier design is presented. High broadband power efficiency is observed from 2.0–2.5 GHz, where drain efficiency ≫74% and PAE ≫71%, when using 2nd-harmonic input tuning. The highest in-band efficiency performance is observed at 2.14 GHz from a 40V supply with peak drain-efficiency of 77.3% and peak PAE of 74.0% at 12W output power and 14dB gain. The best broadband output power performance is observed from 2.1–2.7 GHz without 2nd-harmonic input tuning, where the output power variation is within 1.5dB and power efficiency is between 53% and 66%.",
"title": ""
},
{
"docid": "28a4fd94ba02c70d6781ae38bf35ca5a",
"text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.",
"title": ""
},
{
"docid": "49ca8739b6e28f0988b643fc97e7c6b1",
"text": "Stroke is a leading cause of severe physical disability, causing a range of impairments. Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.",
"title": ""
},
{
"docid": "efcf84406a2218deeb4ca33cb8574172",
"text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.",
"title": ""
},
{
"docid": "9631926db0052f89abe3b540789ed08e",
"text": "DC/DC converters to power future CPU cores mandate low-voltage power metal-oxide semiconductor field-effect transistors (MOSFETs) with ultra low on-resistance and gate charge. Conventional vertical trench MOSFETs cannot meet the challenge. In this paper, we introduce an alternative device solution, the large-area lateral power MOSFET with a unique metal interconnect scheme and a chip-scale package. We have designed and fabricated a family of lateral power MOSFETs including a sub-10 V class power MOSFET with a record-low R/sub DS(ON)/ of 1m/spl Omega/ at a gate voltage of 6V, approximately 50% of the lowest R/sub DS(ON)/ previously reported. The new device has a total gate charge Q/sub g/ of 22nC at 4.5V and a performance figures of merit of less than 30m/spl Omega/-nC, a 3/spl times/ improvement over the state of the art trench MOSFETs. This new MOSFET was used in a 100-W dc/dc converter as the synchronous rectifiers to achieve a 3.5-MHz pulse-width modulation switching frequency, 97%-99% efficiency, and a power density of 970W/in/sup 3/. The new lateral MOSEFT technology offers a viable solution for the next-generation, multimegahertz, high-density dc/dc converters for future CPU cores and many other high-performance power management applications.",
"title": ""
},
{
"docid": "b5e539774c408232797da1f35abcca90",
"text": "The discrete Laplace-Beltrami operator plays a prominent role in many Digital Geometry Processing applications ranging from denoising to parameterization, editing, and physical simulation. The standard discretization uses the cotangents of the angles in the immersed mesh which leads to a variety of numerical problems. We advocate use of the intrinsic Laplace-Beltrami operator. It satis- fies a local maximum principle, guaranteeing, e.g., that no flipped triangles can occur in parameterizations. It also leads to better conditioned linear systems. The intrinsic Laplace-Beltrami operator is based on an intrinsic Delaunay triangulation of the surface. We give an incremental algorithm to construct such triangulations together with an overlay structure which captures the relationship between the extrinsic and intrinsic triangulations. Using a variety of example meshes we demonstrate the numerical benefits of the intrinsic Laplace-Beltrami operator.",
"title": ""
},
{
"docid": "5b320c270439ec6d2db40a192b899c22",
"text": "This thesis studies methods to solve Visual Question-Answering (VQA) tasks with a Deep Learning framework. As a preliminary step, we explore Long Short-Term Memory (LSTM) networks used in Natural Language Processing (NLP) to tackle Question-Answering (text based). We then modify the previous model to accept an image as an input in addition to the question. For this purpose, we explore the VGG-16 and K-CNN convolutional neural networks to extract visual features from the image. These are merged with the word embedding or with a sentence embedding of the question to predict the answer. This work was successfully submitted to the Visual Question Answering Challenge 2016, where it achieved a 53,62% of accuracy in the test dataset. The developed software has followed the best programming practices and Python code style, providing a consistent baseline in Keras for different configurations. The source code and models are publicly available at https://github.com/imatge-upc/vqa-2016-cvprw.",
"title": ""
},
{
"docid": "36165cb8c6690863ed98c490ba889a9e",
"text": "This paper presents a new low-cost digital control solution that maximizes the AC/DC flyback power supply efficiency. This intelligent digital approach achieves the combined benefits of high performance, low cost and high reliability in a single controller. It introduces unique multiple PWM and PFM operational modes adaptively based on the power supply load changes. While the multi-mode PWM/PFM control significantly improves the light-load efficiency and thus the overall average efficiency, it does not bring compromise to other system performance, such as audible noise, voltage ripples or regulations. It also seamlessly integrated an improved quasi-resonant switching scheme that enables valley-mode turn on in every switching cycle without causing modification to the main PWM/PFM control schemes. A digital integrated circuit (IC) that implements this solution, namely iW1696, has been fabricated and introduced to the industry recently. In addition to outlining the approach, this paper provides experimental results obtained on a 3-W (5V/550mA) cell phone charger that is built with the iW1696.",
"title": ""
},
{
"docid": "a3e7a0cd6c0e79dee289c5b31c3dac76",
"text": "Silicone is one of the most widely used filler for facial cosmetic correction and soft tissue augmentation. Although initially it was considered to be a biologically inert material, many local and generalized adverse effects have been reported after silicone usage for cosmetic purposes. We present a previously healthy woman who developed progressive and persistent generalized livedo reticularis after cosmetic surgery for volume augmentation of buttocks. Histopathologic study demonstrated dermal presence of interstitial vacuoles and cystic spaces of different sizes between the collagen bundles, which corresponded to the silicone particles implanted years ago. These vacuoles were clustered around vascular spaces and surrounded by a few foamy macrophages. General examination and laboratory investigations failed to show any evidence of connective tissue disease or other systemic disorder. Therefore, we believe that the silicone implanted may have induced some kind of blood dermal perturbation resulting in the characteristic violet reticular discoloration of livedo reticularis.",
"title": ""
},
{
"docid": "e039567ec759d38da518c7f5eaba08f8",
"text": "With economic globalization and the rapid development of e-commerce, customer relationship management (CRM) has become the core of growth of the company. Data mining, as a powerful data analysis tool, extracts critical information supporting the company to make better decisions by processing a large number of data in commercial databases. This paper introduced the basic concepts of data mining and CRM, and described the process how to use data mining for CRM. At last, the paper described the applications of several main data mining methods in CRM, such as clustering, classification and association rule.",
"title": ""
},
{
"docid": "bb0ac3d88646bf94710a4452ddf50e51",
"text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. 
Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension",
"title": ""
},
{
"docid": "5068191083a9a14751b88793dd96e7d3",
"text": "The electric motor is the main component in an electrical vehicle. Its power density is directly influenced by the winding. For this reason, it is relevant to investigate the influences of coil production on the quality of the stator. The examined stator in this article is wound with the multi-wire needle winding technique. With this method, the placing of the wires can be precisely guided leading to small winding heads. To gain a high winding quality with small winding resistances, the control of the tensile force during the winding process is essential. The influence of the tensile force on the winding resistance during the winding process with the multiple needle winding technique will be presented here. To control the tensile force during the winding process, the stress on the wire during the winding process needs to be examined first. Thus a model will be presented to investigate the tensile force which realizes a coupling between the multibody dynamics simulation and the finite element methods with the software COMSOL Multiphysics®. With the results of the simulation, a new winding-trajectory based wire tension control can be implemented. Therefore, new strategies to control the tensile force during the process using a CAD/CAM approach will be presented in this paper.",
"title": ""
},
{
"docid": "774938c175781ed644327db1dae9d1d4",
"text": "It is widely accepted that sizing or predicting the volumes of various kinds of software deliverable items is one of the first and most dominant aspects of software cost estimating. Most of the cost estimation model or techniques usually assume that software size or structural complexity is the integral factor that influences software development effort. Although sizing and complexity measure is a very critical due to the need of reliable size estimates in the utilization of existing software project cost estimation models and complex problem for software cost estimating, advances in sizing technology over the past 30 years have been impressive. This paper attempts to review the 12 object-oriented software metrics proposed in 90s’ by Chidamber, Kemerer and Li.",
"title": ""
}
] | scidocsrr |
4cc3c9a39d8ff4e4b6c746b82af187d9 | Solving real-world cutting stock-problems in the paper industry: Mathematical approaches, experience and challenges | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "f0a3a1855103ebac224e1351d4fc24df",
"text": "BACKGROUND\nThere have been many randomised trials of adjuvant tamoxifen among women with early breast cancer, and an updated overview of their results is presented.\n\n\nMETHODS\nIn 1995, information was sought on each woman in any randomised trial that began before 1990 of adjuvant tamoxifen versus no tamoxifen before recurrence. Information was obtained and analysed centrally on each of 37000 women in 55 such trials, comprising about 87% of the worldwide evidence. Compared with the previous such overview, this approximately doubles the amount of evidence from trials of about 5 years of tamoxifen and, taking all trials together, on events occurring more than 5 years after randomisation.\n\n\nFINDINGS\nNearly 8000 of the women had a low, or zero, level of the oestrogen-receptor protein (ER) measured in their primary tumour. Among them, the overall effects of tamoxifen appeared to be small, and subsequent analyses of recurrence and total mortality are restricted to the remaining women (18000 with ER-positive tumours, plus nearly 12000 more with untested tumours, of which an estimated 8000 would have been ER-positive). For trials of 1 year, 2 years, and about 5 years of adjuvant tamoxifen, the proportional recurrence reductions produced among these 30000 women during about 10 years of follow-up were 21% (SD 3), 29% (SD 2), and 47% (SD 3), respectively, with a highly significant trend towards greater effect with longer treatment (chi2(1)=52.0, 2p<0.00001). The corresponding proportional mortality reductions were 12% (SD 3), 17% (SD 3), and 26% (SD 4), respectively, and again the test for trend was significant (chi2(1) = 8.8, 2p=0.003). The absolute improvement in recurrence was greater during the first 5 years, whereas the improvement in survival grew steadily larger throughout the first 10 years. The proportional mortality reductions were similar for women with node-positive and node-negative disease, but the absolute mortality reductions were greater in node-positive women. In the trials of about 5 years of adjuvant tamoxifen the absolute improvements in 10-year survival were 10.9% (SD 2.5) for node-positive (61.4% vs 50.5% survival, 2p<0.00001) and 5.6% (SD 1.3) for node-negative (78.9% vs 73.3% survival, 2p<0.00001). These benefits appeared to be largely irrespective of age, menopausal status, daily tamoxifen dose (which was generally 20 mg), and of whether chemotherapy had been given to both groups. In terms of other outcomes among all women studied (ie, including those with \"ER-poor\" tumours), the proportional reductions in contralateral breast cancer were 13% (SD 13), 26% (SD 9), and 47% (SD 9) in the trials of 1, 2, or about 5 years of adjuvant tamoxifen. The incidence of endometrial cancer was approximately doubled in trials of 1 or 2 years of tamoxifen and approximately quadrupled in trials of 5 years of tamoxifen (although the number of cases was small and these ratios were not significantly different from each other). The absolute decrease in contralateral breast cancer was about twice as large as the absolute increase in the incidence of endometrial cancer. Tamoxifen had no apparent effect on the incidence of colorectal cancer or, after exclusion of deaths from breast or endometrial cancer, on any of the other main categories of cause of death (total nearly 2000 such deaths; overall relative risk 0.99 [SD 0.05]).\n\n\nINTERPRETATION\nFor women with tumours that have been reliably shown to be ER-negative, adjuvant tamoxifen remains a matter for research. 
However, some years of adjuvant tamoxifen treatment substantially improves the 10-year survival of women with ER-positive tumours and of women whose tumours are of unknown ER status, with the proportional reductions in breast cancer recurrence and in mortality appearing to be largely unaffected by other patient characteristics or treatments.",
"title": ""
},
{
"docid": "3a32ac999ea003d992f3dd7d7d41d601",
"text": "Collectively, disruptive technologies and market forces have resulted in a significant shift in the structure of many industries, presenting a serious challenge to near-term profitability and long-term viability. Cloud capabilities continue to promise payoffs in reduced costs and increased efficiencies, but in this article, we show they can provide business model transformation opportunities as well. To date, the focus of much research on cloud computing and cloud services has been on understanding the technology challenges, business opportunities or applications for particular domains.3 Cloud services, however, also offer great new opportunities for small and mediumsized enterprises (SMEs) that lack large IT shops or internal capabilities, as well as larger firms. An early analysis of four SMEs4 found that cloud services can offer both economic and business operational value previously denied them. This distinction is important because it shows that cloud services can provide value beyond simple cost avoidance or reduction",
"title": ""
},
{
"docid": "ebaedd43e151f13d1d4d779284af389d",
"text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.",
"title": ""
},
{
"docid": "1f94d244dd24bd9261613098c994cf9d",
"text": "With the development and introduction of smart metering, the energy information for costumers will change from infrequent manual meter readings to fine-grained energy consumption data. On the one hand these fine-grained measurements will lead to an improvement in costumers' energy habits, but on the other hand the fined-grained data produces information about a household and also households' inhabitants, which are the basis for many future privacy issues. To ensure household privacy and smart meter information owned by the household inhabitants, load hiding techniques were introduced to obfuscate the load demand visible at the household energy meter. In this work, a state-of-the-art battery-based load hiding (BLH) technique, which uses a controllable battery to disguise the power consumption and a novel load hiding technique called load-based load hiding (LLH) are presented. An LLH system uses an controllable household appliance to obfuscate the household's power demand. We evaluate and compare both load hiding techniques on real household data and show that both techniques can strengthen household privacy but only LLH can increase appliance level privacy.",
"title": ""
},
{
"docid": "7e42516a73e8e5f80d009d0ff305156c",
"text": "This article provides a review of evolutionary theory and empirical research on mate choices in nonhuman species and uses it as a frame for understanding the how and why of human mate choices. The basic principle is that the preferred mate choices and attendant social cognitions and behaviors of both women and men, and those of other species, have evolved to focus on and exploit the reproductive potential and reproductive investment of members of the opposite sex. Reproductive potential is defined as the genetic, material, and/or social resources an individual can invest in offspring, and reproductive investment is the actual use of these resources to enhance the physical and social well- being of offspring. Similarities and differences in the mate preferences and choices of women and men are reviewed and can be understood in terms of similarities and differences in the form of reproductive potential that women and men have to offer and their tendency to use this potential for the well-being of children.",
"title": ""
},
{
"docid": "caea6d9ec4fbaebafc894167cfb8a3d6",
"text": "Although the positive effects of different kinds of physical activity (PA) on cognitive functioning have already been demonstrated in a variety of studies, the role of cognitive engagement in promoting children's executive functions is still unclear. The aim of the current study was therefore to investigate the effects of two qualitatively different chronic PA interventions on executive functions in primary school children. Children (N = 181) aged between 10 and 12 years were assigned to either a 6-week physical education program with a high level of physical exertion and high cognitive engagement (team games), a physical education program with high physical exertion but low cognitive engagement (aerobic exercise), or to a physical education program with both low physical exertion and low cognitive engagement (control condition). Executive functions (updating, inhibition, shifting) and aerobic fitness (multistage 20-m shuttle run test) were measured before and after the respective condition. Results revealed that both interventions (team games and aerobic exercise) have a positive impact on children's aerobic fitness (4-5% increase in estimated VO2max). Importantly, an improvement in shifting performance was found only in the team games and not in the aerobic exercise or control condition. Thus, the inclusion of cognitive engagement in PA seems to be the most promising type of chronic intervention to enhance executive functions in children, providing further evidence for the importance of the qualitative aspects of PA.",
"title": ""
},
{
"docid": "461fbb108d5589621a7ff15fcc306153",
"text": "Current methods for detector gain calibration require acquisition of tens of special calibration images. Here we propose a method that obtains the gain from the actual image for which the photon count is desired by quantifying out-of-band information. We show on simulation and experimental data that our much simpler procedure, which can be retroactively applied to any image, is comparable in precision to traditional gain calibration procedures. Optical recordings consist of detected photons, which typically arrive in an uncorrelated manner at the detector. Therefore the recorded intensity follows a Poisson distribution, where the variance of the photon count is equal to its mean. In many applications images must be further processed based on these statistics and it is therefore of great importance to be able to relate measured values S in analogue-to-digital-units (ADU) to the detected (effective) photon numbers N. The relation between the measured signal S in ADU and the photon count N is given by the linear gain g as S = gN. Only after conversion to photons is it possible to establish the expected precision of intensities in the image, which is essential for single particle localization, maximum-likelihood image deconvolution or denoising [Ober2004, Smith2010, Afanasyev2015, Strohl2015]. The photon count must be established via gain calibration, as most image capturing devices do not directly report the number of detected photons, but a value proportional to the photoelectron charge produced in a photomultiplier tube or collected in a camera pixel. For this calibration typically tens of calibration images are recorded and the linear relationship between mean intensity and its variance is exploited [vanVliet1998]. In current microscopy practise a detector calibration to photon counts is often not done but cannot be performed in retrospect. It thus would be extremely helpful, if that can be determined from analysing the acquisition itself – a single image. A number of algorithms have been published for Gaussian type noise [Donoho1995, Immerkaer1996] and Poissonian type noise [Foi2008, Colom2014, Azzari2014, Pyatykh2014]. However, all these routines use assumed image properties to extract the information rather than just the properties of the acquisition process as in our presented algorithm. This has major implications for their performance on microscopy images governed by photon statistics (see Supplementary Information for a comparison with implementations from Pyatykh et al. [Pyatykh2014] and Azzari et al. [Azzari2014] which performed more than an order of magnitude worse than our method). Some devices, such as avalanche photodiodes, photomultiplier tubes (PMTs) or emCCD cameras can be operated in a single photon counting mode [Chao2013] where the gain is known to be one. In many cases, however, the gain is unknown and/or a device setting. For example, the gain of PMTs can be continuously controlled by changing the voltage between the dynodes and the gain of cameras may deviate from the value stated in the manual. To complicate matters, devices not running in photon counting mode, use an offset Ozero to avoid negative readout values, i.e. the device will yield a non-zero mean value even if no light reaches the detector, S = gN + Ozero. This offset value Ozero is sometimes changing over time (“offset drift”). Traditionally, a series of about 20 dark images and 20 images of a sample with smoothly changing intensity are recorded [vanVliet1998]. 
From these images the gain is calculated as the linear slope of the variance over these images versus the mean intensity g = var(S)/mean(S) (for details see Supplementary information). In Figure 1 we show a typical calibration curve by fitting (blue line) the experimentally obtained data (blue crosses). The obtained gain does not necessarily correspond to the real gain per detected photon, since it includes multiplicative noise sources such as multiplicative amplification noise, gain fluctuations or the excess noise of emCCDs and PMTs. In addition there is also readout noise, which includes thermal noise build-up and clock induced charge. The unknown readout noise and offset may seem at first glance disadvantageous regarding an automatic quantification. However, as shown below, these details do not matter for the purpose of predicting the correct noise from a measured signal. Let us first assume that we know the offset Ozero and any potential readout noise variance Vread. The region in Fourier space above the cut-off frequency of the support of the optical transfer function only contains noise in an image [Liu2017], where both Poisson and Gaussian noise are evenly distributed over all frequencies [Chanran1990, Liu2017]. By measuring the spectral power density of the noise VHF in this high-frequency out-of-band region and accounting for the area fraction f of this region in Fourier space, we can estimate the total variance Vall = VHF/f of all detected photons. The gain g is then obtained as (1) g = (Vall - Vread) / Σ(S - Ozero), where we relate the photon-noise-only variance Vall - Vread to the summed offset-corrected signal over all pixels in the image (see Online Methods). The device manufacturers usually provide the readout noise, leaving only the offset and gain to be determined from the image itself in practise. To also estimate both, the offset together with the gain, we need more information from the linear mean-variance dependence than given by equation (1). We achieve this by tiling the input image, e.g. into 3×3 sub-images, and processing each of these sub-images to generate one data point in a mean-variance plot. From these nine data points we obtain the axis offset (Ono-noise). We then perform the gain estimation (1) on the whole image after offset correction (see Online Methods and Supplementary Information). As seen from Figure 1 the linear regression of the mean-variance curve determines the axis offset ADU value Ono-noise at which zero noise would be expected. Yet we cannot simultaneously determine both offset Ozero and readout noise Vread. If either of them is known a priori, the other can be calculated: Vread = g(Ozero - Ono-noise), which is, however, not needed to predict the correct noise level for each brightness level based on the automatically determined value Ono-noise. To test the single-image gain calibration, simulated data was generated for a range of gains (0.1, 1, 10) with a constant offset (100 ADU), a range of readout noise (1, 2, 10 photon RMS) and maximum expected photon counts per pixel (10, 100, ..., 10). Offset and gain were both determined from band-limited single images of two different objects (resolution target and Einstein) without significant relative errors in the offset or gain (less than 2% at more than 10 expected maximum photon counts) using the proposed method (see Supplementary Figures S1-S3). 
Figure 1 quantitatively compares the intensity dependent variance predicted by applying our method individually to many single experimental in-focus images (shaded green area) with the classical method evaluating a whole series of calibration images (blue line). Note that our single-image based noise determination does not require any prior knowledge about offset or readout noise. Figure 2 shows a few typical example images acquired with various detectors together with the gain and offset determined from each of them and the calibration values obtained from the standard procedure for comparison. We evaluated the general applicability of our method on datasets from different detectors and modes of acquisition (CCD, emCCD, sCMOS, PMT, GAsP and Hybrid Detector). Figure 3 quantitatively compares experimental single image calibration with classical calibration. 20 individual images were each submitted to our algorithm and the determined offset and gain was compared to the classical method. The variance of a separately acquired dark image was submitted to the algorithm as a readout noise estimate, but alternatively the readout noise specification from the handbook could be used or a measured offset at zero intensity. As seen from Figure 3 the singleimage-based gain calibration as proposed performs nearly as well as the standard gain calibration using 20 images. The relative gain error stays generally well below 10% and for cameras below 2%. The 8.5% bias for the HyD photon counting system is unusually high, and we were unable to find a clear explanation for this deviation from the classical calibration. Using only lower frequencies to estimate VHF (kt =0.4) resulted in a much smaller error of 2.5% in the single-photon counting case suggesting that dead-time effects of the detector might have affected the high spatial frequencies. Simulations as well as experiments show a good agreement of the determined gain with the ground truth or gold standard calibration respectively. The bias of the gain determined by the single-image routine stayed below 4% (except for HyD). For intensity quantification any potential offset must be subtracted before conversion to photon counts. Our method estimates the photon count very precisely over a large range of parameters (relative error below 2% in simulations). Our method could be applied to many different microscopy modes (widefield transmission, fluorescence, and confocal) and detector types (CCD, emCCD, sCMOS, PMT, GAsP and HyD photon counting), because we only require the existence of an out-of-band region, which purely contains frequency independent noise. This is usually true, if the image is sampled correctly. As discussed in the Supplementary Information the cut-off limit of our algorithm can in practise be set below the transfer limit and single-image calibration can even outperform the standard calibration if molecular blinking perturbs the measurement In summary we showed that single image calibration is a simple and versatile tool. We expect our work to lead to a better ability to quantify intensities in general.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "fa246c15531c6426cccaf4d216dc8375",
"text": "Proboscis lateralis is a rare craniofacial malformation characterized by absence of nasal cavity on one side with a trunk-like nasal appendage protruding from superomedial portion of the ipsilateral orbit. High-resolution computed tomography and magnetic resonance imaging are extremely useful in evaluating this congenital condition and the wide spectrum of associated anomalies occurring in the surrounding anatomical regions and brain. We present a case of proboscis lateralis in a 2-year-old girl with associated ipsilateral sinonasal aplasia, orbital cyst, absent olfactory bulb and olfactory tract. Absence of ipsilateral olfactory pathway in this rare disorder has been documented on high-resolution computed tomography and magnetic resonance imaging by us for the first time in English medical literature.",
"title": ""
},
{
"docid": "db7edbb1a255e9de8486abbf466f9583",
"text": "Nowadays, adopting an optimized irrigation system has become a necessity due to the lack of the world water resource. The system has a distributed wireless network of soil-moisture and temperature sensors. This project focuses on a smart irrigation system which is cost effective. As the technology is growing and changing rapidly, Wireless sensing Network (WSN) helps to upgrade the technology where automation is playing important role in human life. Automation allows us to control various appliances automatically. DC motor based vehicle is designed for irrigation purpose. The objectives of this paper were to control the water supply to each plant automatically depending on values of temperature and soil moisture sensors. Mechanism is done such that soil moisture sensor electrodes are inserted in front of each soil. It also monitors the plant growth using various parameters like height and width. Android app.",
"title": ""
},
{
"docid": "cce5d75bfcfc22f7af08f6b0b599d472",
"text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policement (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.",
"title": ""
},
{
"docid": "e28336bccbb1414dc9a92404f08b6b6f",
"text": "YouTube has become one of the largest websites on the Internet. Among its many genres, both professional and amateur science communicators compete for audience attention. This article provides the first overview of science communication on YouTube and examines content factors that affect the popularity of science communication videos on the site. A content analysis of 390 videos from 39 YouTube channels was conducted. Although professionally generated content is superior in number, user-generated content was significantly more popular. Furthermore, videos that had consistent science communicators were more popular than those without a regular communicator. This study represents an important first step to understand content factors, which increases the channel and video popularity of science communication on YouTube.",
"title": ""
},
{
"docid": "4b544bb34c55e663cdc5f0a05201e595",
"text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.",
"title": ""
},
{
"docid": "c1ddf32bfa71f32e51daf31e077a87cd",
"text": "There is a step of significant difficulty experienced by brain-computer interface (BCI) users when going from the calibration recording to the feedback application. This effect has been previously studied and a supervised adaptation solution has been proposed. In this paper, we suggest a simple unsupervised adaptation method of the linear discriminant analysis (LDA) classifier that effectively solves this problem by counteracting the harmful effect of nonclass-related nonstationarities in electroencephalography (EEG) during BCI sessions performed with motor imagery tasks. For this, we first introduce three types of adaptation procedures and investigate them in an offline study with 19 datasets. Then, we select one of the proposed methods and analyze it further. The chosen classifier is offline tested in data from 80 healthy users and four high spinal cord injury patients. Finally, for the first time in BCI literature, we apply this unsupervised classifier in online experiments. Additionally, we show that its performance is significantly better than the state-of-the-art supervised approach.",
"title": ""
},
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "750abc9e51aed62305187d7103e3f267",
"text": "This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set ofguidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. Theseare demonstrated in an applications context through interactive software prototypes. The guidelines are derived from cartographicliterature and in liaison with EDINA who provide digital mapping services for UK tertiary education. They enhance approaches tolegend design that have evolved for static media with visualization by considering: selection, layout, symbols, position, dynamismand design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphicand The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specificneeds, rethink their nature and role and contribute to a wider re-evaluation of maps as artifacts of usage rather than statements offact. EDINA has acquired funding to enhance their clients with visualization legends that use these concepts as a consequence ofthis work. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.",
"title": ""
},
{
"docid": "5433a8e449bf4bf9d939e645e171f7e5",
"text": "Software Testing (ST) processes attempt to verify and validate the capability of a software system to meet its required attributes and functionality. As software systems become more complex, the need for automated software testing methods emerges. Machine Learning (ML) techniques have shown to be quite useful for this automation process. Various works have been presented in the junction of ML and ST areas. The lack of general guidelines for applying appropriate learning methods for software testing purposes is our major motivation in this current paper. In this paper, we introduce a classification framework which can help to systematically review research work in the ML and ST domains. The proposed framework dimensions are defined using major characteristics of existing software testing and machine learning methods. Our framework can be used to effectively construct a concrete set of guidelines for choosing the most appropriate learning method and applying it to a distinct stage of the software testing life-cycle for automation purposes.",
"title": ""
},
{
"docid": "4a84fabb0b4edefc1850940ed2081f47",
"text": "Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.",
"title": ""
},
{
"docid": "1720517b913ce3974ab92239ff8a177e",
"text": "Honeypot is a closely monitored computer resource that emulates behaviors of production host within a network in order to lure and attract the attackers. The workability and effectiveness of a deployed honeypot depends on its technical configuration. Since honeypot is a resource that is intentionally made attractive to the attackers, it is crucial to make it intelligent and self-manageable. This research reviews at artificial intelligence techniques such as expert system and case-based reasoning, in order to build an intelligent honeypot.",
"title": ""
}
] | scidocsrr |
9ab20062b846a737c67c08bed9fe8e3c | Semantic Word Clusters Using Signed Spectral Clustering | [
{
"docid": "37f0bea4c677cfb7b931ab174d4d20c7",
"text": "A persistent problem of psychology has been how to deal conceptually with patterns of interdependent properties. This problem has been central, of course, in the theoretical treatment by Gestalt psychologists of phenomenal or neural configurations or fields (12, 13, 15). It has also been of concern to social psychologists and sociologists who attempt to employ concepts referring to social systems (18). Heider (19), reflecting the general field-theoretical approach, has considered certain aspects of cognitive fields which contain perceived people and impersonal objects or events. His analysis focuses upon what he calls the P-O-X unit of a cognitive field, consisting of P (one person), 0 (another person), and X (an impersonal entity). Each relation among the parts of the unit is conceived as interdependent with each other relation. Thus, for example, if P has a relation of affection for 0 and if 0 is seen as responsible for X, then there will be a tendency for P to like or approve of X. If the nature of X is such that it would \"normally\" be evaluated as bad, the whole P-O-X unit is placed in a state of imbalance, and pressures",
"title": ""
},
{
"docid": "d46af3854769569a631fab2c3c7fa8f3",
"text": "Existing vector space models typically map synonyms and antonyms to similar word vectors, and thus fail to represent antonymy. We introduce a new vector space representation where antonyms lie on opposite sides of a sphere: in the word vector space, synonyms have cosine similarities close to one, while antonyms are close to minus one. We derive this representation with the aid of a thesaurus and latent semantic analysis (LSA). Each entry in the thesaurus – a word sense along with its synonyms and antonyms – is treated as a “document,” and the resulting document collection is subjected to LSA. The key contribution of this work is to show how to assign signs to the entries in the co-occurrence matrix on which LSA operates, so as to induce a subspace with the desired property. We evaluate this procedure with the Graduate Record Examination questions of (Mohammed et al., 2008) and find that the method improves on the results of that study. Further improvements result from refining the subspace representation with discriminative training, and augmenting the training data with general newspaper text. Altogether, we improve on the best previous results by 11 points absolute in F measure.",
"title": ""
}
] | [
{
"docid": "0f11d0d1047a79ee63896f382ae03078",
"text": "Much of the visual cortex is organized into visual field maps: nearby neurons have receptive fields at nearby locations in the image. Mammalian species generally have multiple visual field maps with each species having similar, but not identical, maps. The introduction of functional magnetic resonance imaging made it possible to identify visual field maps in human cortex, including several near (1) medial occipital (V1,V2,V3), (2) lateral occipital (LO-1,LO-2, hMT+), (3) ventral occipital (hV4, VO-1, VO-2), (4) dorsal occipital (V3A, V3B), and (5) posterior parietal cortex (IPS-0 to IPS-4). Evidence is accumulating for additional maps, including some in the frontal lobe. Cortical maps are arranged into clusters in which several maps have parallel eccentricity representations, while the angular representations within a cluster alternate in visual field sign. Visual field maps have been linked to functional and perceptual properties of the visual system at various spatial scales, ranging from the level of individual maps to map clusters to dorsal-ventral streams. We survey recent measurements of human visual field maps, describe hypotheses about the function and relationships between maps, and consider methods to improve map measurements and characterize the response properties of neurons comprising these maps.",
"title": ""
},
{
"docid": "becda89fbb882f4da57a82441643bb99",
"text": "During the nonbreeding season, adult Anna and black-chinned hummingbirds (Calypte anna and Archilochus alexandri) have lower defense costs and more exclusive territories than juveniles. Adult C. anna are victorious over juveniles in aggressive encounters, and tend to monopolize the most temporally predictable resources. Juveniles are more successful than adults at stealing food from territories (the primary alternative to territoriality), presumably because juveniles are less brightly colored. Juveniles have lighter wing disc loading than adults, and consequently should have lower rates of energy expenditure during flight. Reduced flight expenditures may be more important for juveniles because their foraging strategy requires large amounts of flight time. These results support the contention of the asymmetry hypothesis that dominance can result from a contested resource being more valuable to one contestant than to the other. Among juveniles, defence costs are also negatively correlated with age and coloration; amount of conspicucus coloration is negatively correlated with the number of bill striations, an inverse measure of age.",
"title": ""
},
{
"docid": "eaf1c419853052202cb90246e48a3697",
"text": "The objective of this document is to promote the use of dynamic daylight performance measures for sustainable building design. The paper initially explores the shortcomings of conventional, static daylight performance metrics which concentrate on individual sky conditions, such as the common daylight factor. It then provides a review of previously suggested dynamic daylight performance metrics, discussing the capability of these metrics to lead to superior daylighting designs and their accessibility to nonsimulation experts. Several example offices are examined to demonstrate the benefit of basing design decisions on dynamic performance metrics as opposed to the daylight factor. Keywords—–daylighting, dynamic, metrics, sustainable buildings",
"title": ""
},
{
"docid": "7046221ad9045cb464f65666c7d1a44e",
"text": "OBJECTIVES\nWe analyzed differences in pediatric elevated blood lead level incidence before and after Flint, Michigan, introduced a more corrosive water source into an aging water system without adequate corrosion control.\n\n\nMETHODS\nWe reviewed blood lead levels for children younger than 5 years before (2013) and after (2015) water source change in Greater Flint, Michigan. We assessed the percentage of elevated blood lead levels in both time periods, and identified geographical locations through spatial analysis.\n\n\nRESULTS\nIncidence of elevated blood lead levels increased from 2.4% to 4.9% (P < .05) after water source change, and neighborhoods with the highest water lead levels experienced a 6.6% increase. No significant change was seen outside the city. Geospatial analysis identified disadvantaged neighborhoods as having the greatest elevated blood lead level increases and informed response prioritization during the now-declared public health emergency.\n\n\nCONCLUSIONS\nThe percentage of children with elevated blood lead levels increased after water source change, particularly in socioeconomically disadvantaged neighborhoods. Water is a growing source of childhood lead exposure because of aging infrastructure.",
"title": ""
},
{
"docid": "10b94bdea46ff663dd01291c5dac9e9f",
"text": "The notion of an instance is ubiquitous in knowledge representations for domain modeling. Most languages used for domain modeling offer syntactic or semantic restrictions on specific language constructs that distinguish individuals and classes in the application domain. The use, however, of instances and classes to represent domain entities has been driven by concerns that range from the strictly practical (e.g. the exploitation of inheritance) to the vaguely philosophical (e.g. intuitive notions of intension and extension). We demonstrate the importance of establishing a clear ontological distinction between instances and classes, and then show modeling scenarios where a single object may best be viewed as a class and an instance. To avoid ambiguous interpretations of such objects, it is necessary to introduce separate universes of discourse in which the same object exists in different forms. We show that a limited facility to support this notion exists in modeling languages like Smalltalk and CLOS, and argue that a more general facility should be made explicit in modeling languages.",
"title": ""
},
{
"docid": "b72f4554f2d7ac6c5a8000d36a099e67",
"text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.",
"title": ""
},
{
"docid": "5d9b29c10d878d288a960ae793f2366e",
"text": "We propose a new bandgap reference topology for supply voltages as low as one diode drop (~0.8V). In conventional low-voltage references, supply voltage is limited by the generated reference voltage. Also, the proposed topology generates the reference voltage at the output of the feedback amplifier. This eliminates the need for an additional output buffer, otherwise required in conventional topologies. With the bandgap core biased from the reference voltage, the new topology is also suitable for a low-voltage shunt reference. We fabricated a 1V, 0.35mV/degC reference occupying 0.013mm2 in a 90nm CMOS process",
"title": ""
},
{
"docid": "de630d018f3ff24fad06976e8dc390fa",
"text": "A critical first step in navigation of unmanned aerial vehicles is the detection of the horizon line. This information can be used for adjusting flight parameters, attitude estimation as well as obstacle detection and avoidance. In this paper, a fast and robust technique for precise detection of the horizon is presented. Our approach is to apply convolutional neural networks to the task, training them to detect the sky and ground regions as well as the horizon line in flight videos. Thorough experiments using large datasets illustrate the significance and accuracy of this technique for various types of terrain as well as seasonal conditions.",
"title": ""
},
{
"docid": "cb47cc2effac1404dd60a91a099699d1",
"text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.",
"title": ""
},
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "af5cd4c5325db5f7d9131b7a7ba12ba5",
"text": "Understanding unstructured text in e-commerce catalogs is important for product search and recommendations. In this paper, we tackle the product discovery problem for fashion e-commerce catalogs where each input listing text consists of descriptions of one or more products; each with its own set of attributes. For instance, [this RED printed short top paired with blue jeans makes you go green] contains two products: item top with attributes {pattern=printed, length=short, brand=RED} and item jeans with attributes {color=blue}. The task of product discovery is rendered quite challenging due to the complexity of fashion dictionary (e.g. RED is a brand or green is a metaphor) added to the difficulty of associating attributes to appropriate items (e.g. associating RED brand with item top). Beyond classical attribute extraction task, product discovery entails parsing multi-sentence listings to tag new items and attributes unknown to the underlying schema; at the same time, associating attributes to relevant items to form meaningful products. Towards solving this problem, we propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. To our knowledge, this is the first work to tackle product discovery and show effectiveness of neural architectures on a complex dataset that goes beyond popular datasets for POS tagging and NER.",
"title": ""
},
{
"docid": "e1b69d4f2342a90b52215927f727421b",
"text": "We present an inertial sensor based monitoring system for measuring upper limb movements in real time. The purpose of this study is to develop a motion tracking device that can be integrated within a home-based rehabilitation system for stroke patients. Human upper limbs are represented by a kinematic chain in which there are four joint variables to be considered: three for the shoulder joint and one for the elbow joint. Kinematic models are built to estimate upper limb motion in 3-D, based on the inertial measurements of the wrist motion. An efficient simulated annealing optimisation method is proposed to reduce errors in estimates. Experimental results demonstrate the proposed system has less than 5% errors in most motion manners, compared to a standard motion tracker.",
"title": ""
},
{
"docid": "303098fa8e5ccd7cf50a955da7e47f2e",
"text": "This paper describes the SALSA corpus, a large German corpus manually annotated with role-semantic information, based on the syntactically annotated TIGER newspaper corpus (Brants et al., 2002). The first release, comprising about 20,000 annotated predicate instances (about half the TIGER corpus), is scheduled for mid-2006. In this paper we discuss the frame-semantic annotation framework and its cross-lingual applicability, problems arising from exhaustive annotation, strategies for quality control, and possible applications.",
"title": ""
},
{
"docid": "647ede4f066516a0343acef725e51d01",
"text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.",
"title": ""
},
{
"docid": "ddc6a5e9f684fd13aec56dc48969abc2",
"text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.",
"title": ""
},
{
"docid": "0830abcb23d763c1298bf4605f81eb72",
"text": "A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGBD images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.",
"title": ""
},
{
"docid": "27487316cbda79a378b706d19d53178f",
"text": "Pallister-Killian syndrome (PKS) is a congenital disorder attributed to supernumerary isochromosome 12p mosaicism. Craniofacial dysmorphism, learning impairment and seizures are considered cardinal features. However, little is known regarding the seizure and epilepsy patterns in PKS. To better define the prevalence and spectrum of seizures in PKS, we studied 51 patients (39 male, 12 female; median age 4 years and 9 months; age range 7 months to 31 years) with confirmed 12p tetrasomy. Using a parent-based structured questionnaire, we collected data regarding seizure onset, frequency, timing, semiology, and medication therapy. Patients were recruited through our practice, at PKS Kids family events, and via the PKS Kids website. Epilepsy occurred in 27 (53%) with 23 (85%) of those with seizures having seizure onset prior to 3.5 years of age. Mean age at seizure onset was 2 years and 4 months. The most common seizure types were myoclonic (15/27, 56%), generalized convulsions (13/27, 48%), and clustered tonic spasms (similar to infantile spasms; 8/27, 30%). Thirteen of 27 patients with seizures (48%) had more than one seizure type with 26 out of 27 (96%) ever having taken antiepileptic medications. Nineteen of 27 (70%) continued to have seizures and 17/27 (63%) remained on antiepileptic medication. The most commonly used medications were: levetiracetam (10/27, 37%), valproic acid (10/27, 37%), and topiramate (9/27, 33%) with levetiracetam felt to be \"most helpful\" by parents (6/27, 22%). Further exploration of seizure timing, in-depth analysis of EEG recordings, and collection of MRI data to rule out confounding factors is warranted.",
"title": ""
},
{
"docid": "ffc9a5b907f67e1cedd8f9ab0b45b869",
"text": "In this brief, we study the design of a feedback and feedforward controller to compensate for creep, hysteresis, and vibration effects in an experimental piezoactuator system. First, we linearize the nonlinear dynamics of the piezoactuator by accounting for the hysteresis (as well as creep) using high-gain feedback control. Next, we model the linear vibrational dynamics and then invert the model to find a feedforward input to account vibration - this process is significantly easier than considering the complete nonlinear dynamics (which combines hysteresis and vibration effects). Afterwards, the feedforward input is augmented to the feedback-linearized system to achieve high-precision highspeed positioning. We apply the method to a piezoscanner used in an experimental atomic force microscope to demonstrate the method's effectiveness and we show significant reduction of both the maximum and root-mean-square tracking error. For example, high-gain feedback control compensates for hysteresis and creep effects, and in our case, it reduces the maximum error (compared to the uncompensated case) by over 90%. Then, at relatively high scan rates, the performance of the feedback controlled system can be improved by over 75% (i.e., reduction of maximum error) when the inversion-based feedforward input is integrated with the high-gain feedback controlled system.",
"title": ""
},
{
"docid": "4023c95464a842277e4dc62b117de8d0",
"text": "Many complex spike cells in the hippocampus of the freely moving rat have as their primary correlate the animal's location in an environment (place cells). In contrast, the hippocampal electroencephalograph theta pattern of rhythmical waves (7-12 Hz) is better correlated with a class of movements that change the rat's location in an environment. During movement through the place field, the complex spike cells often fire in a bursting pattern with an interburst frequency in the same range as the concurrent electroencephalograph theta. The present study examined the phase of the theta wave at which the place cells fired. It was found that firing consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle. This precession of the phase ranged from 100 degrees to 355 degrees in different cells. The effect appeared to be due to the fact that individual cells had a higher interburst rate than the theta frequency. The phase was highly correlated with spatial location and less well correlated with temporal aspects of behavior, such as the time after place field entry. These results have implications for several aspects of hippocampal function. First, by using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding. Second, the characteristics of the phase shift constrain the models that define the construction of place fields. Third, the results restrict the temporal and spatial circumstances under which synapses in the hippocampus could be modified.",
"title": ""
},
{
"docid": "6bc31257bfbcc9531a3acf1ec738c790",
"text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.",
"title": ""
}
] | scidocsrr |
e0010e45735154c0088a1485a137db46 | A scalability analysis of classifiers in text categorization | [
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
}
] | [
{
"docid": "f8a1ba148f564f9dcc0c57873bb5ce60",
"text": "Advances in online technologies have raised new concerns about privacy. A sample of expert household end users was surveyed concerning privacy, risk perceptions, and online behavior intentions. A new e-privacy typology consisting of privacyaware, privacy-suspicious, and privacy-active types was developed from a principal component factor analysis. Results suggest the presence of a privacy hierarchy of effects where awareness leads to suspicion, which subsequently leads to active behavior. An important finding was that privacy-active behavior that was hypothesized to increase the likelihood of online subscription and purchasing was not found to be significant. A further finding was that perceived risk had a strong negative influence on the extent to which respondents participated in online subscription and purchasing. Based on these results, a number of implications for managers and directions for future research are discussed.",
"title": ""
},
{
"docid": "c5427ac777eaa3ecf25cb96a124eddfe",
"text": "One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.",
"title": ""
},
{
"docid": "f4d060cd114ffa2c028dada876fcb735",
"text": "Mutations of SALL1 related to spalt of Drosophila have been found to cause Townes-Brocks syndrome, suggesting a function of SALL1 for the development of anus, limbs, ears, and kidneys. No function is yet known for SALL2, another human spalt-like gene. The structure of SALL2 is different from SALL1 and all other vertebrate spalt-like genes described in mouse, Xenopus, and Medaka, suggesting that SALL2-like genes might also exist in other vertebrates. Consistent with this hypothesis, we isolated and characterized a SALL2 homologous mouse gene, Msal-2. In contrast to other vertebrate spalt-like genes both SALL2 and Msal-2 encode only three double zinc finger domains, the most carboxyterminal of which only distantly resembles spalt-like zinc fingers. The evolutionary conservation of SALL2/Msal-2 suggests that two lines of sal-like genes with presumably different functions arose from an early evolutionary duplication of a common ancestor gene. Msal-2 is expressed throughout embryonic development but also in adult tissues, predominantly in brain. However, the function of SALL2/Msal-2 still needs to be determined.",
"title": ""
},
{
"docid": "e7646a79b25b2968c3c5b668d0216aa6",
"text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediatelevel descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.",
"title": ""
},
{
"docid": "68c840dbfe505d735b389dd9ff7715d3",
"text": "A new design for single-feed dual-layer dual-band patch antenna with linear polarization is presented in this letter. The dual-band performance is achieved by E-shaped and U-slot patches. The proposed bands of the antenna are WLAN (2.40-2.4835 GHz) and WiMAX (3.40-3.61 GHz) bands. The fundamental modes of the two bands are TM01 mode, and the impedance bandwidths ( ) of 26.9% and 7.1% are achieved at central frequencies of 2.60 and 3.50 GHz. The peak gains of two different bands are 7.1 and 7.4 dBi, and good band isolation is achieved between the two bands. The advantages of the antenna are simple structure, wideband performance at low band, and high gains.",
"title": ""
},
{
"docid": "1eb43d21aa090151aef2ba722b6fc704",
"text": "This study was carried out to investigate pre-service teachers’ perceived ease of use, perceived usefulness, attitude and intentions towards the utilization of virtual laboratory package in teaching and learning of Nigerian secondary school physics concepts. Descriptive survey research was employed and 66 fourth and fifth year Physics education students were purposively used as research sample. Four research questions guided the study and a 16-item questionnaire was used as instrument for data collection. The questionnaire was validated by educational technology experts, physics expert and guidance and counselling experts. Pilot study was carried out on year three physics education students and a reliability coefficients ranging from 0.76 to 0.89 was obtained for each of the four sections of the questionnaire. Data collected from the administration of the research instruments were analyzed using descriptive statistics of Mean and Standard Deviation. A decision rule was set, in which, a mean score of 2.50 and above was considered Agreed while a mean score below 2.50 was considered Disagreed. Findings revealed that pre-service physics teachers perceived the virtual laboratory package easy to use and useful with mean scores of 3.18 and 3.34 respectively. Also, respondents’ attitude and intentions to use the package in teaching and learning of physics were positive with mean scores of 3.21 and 3.37 respectively. Based on these findings, it was recommended among others that administrators should equip schools with adequate Information and Communication Technology facilities that would aid students and teachers’ utilization of virtual-based learning environments in teaching and learning process.",
"title": ""
},
{
"docid": "5a73be1c8c24958779272a1190a3df20",
"text": "We study how contract element extraction can be automated. We provide a labeled dataset with gold contract element annotations, along with an unlabeled dataset of contracts that can be used to pre-train word embeddings. Both datasets are provided in an encoded form to bypass privacy issues. We describe and experimentally compare several contract element extraction methods that use manually written rules and linear classifiers (logistic regression, SVMs) with hand-crafted features, word embeddings, and part-of-speech tag embeddings. The best results are obtained by a hybrid method that combines machine learning (with hand-crafted features and embeddings) and manually written post-processing rules.",
"title": ""
},
{
"docid": "86d725fa86098d90e5e252c6f0aaab3c",
"text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.",
"title": ""
},
{
"docid": "9a7ef5c9f6ceca7a88d2351504404954",
"text": "In this paper, we propose a 3D HMM (Three-dimensional Hidden Markov Models) approach to recognizing human facial expressions and associated emotions. Human emotion is usually classified by psychologists into six categories: Happiness, Sadness, Anger, Fear, Disgust and Surprise. Further, psychologists categorize facial movements based on the muscles that produce those movements using a Facial Action Coding System (FACS). We look beyond pure muscle movements and investigate facial features – brow, mouth, nose, eye height and facial shape – as a means of determining associated emotions. Histogram of Optical Flow is used as the descriptor for extracting and describing the key features, while training and testing are performed on 3D Hidden Markov Models. Experiments on datasets show our approach is promising and robust.",
"title": ""
},
{
"docid": "f06e080b68b5c6d640e4745537610843",
"text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed triplets could be costly. Hence, a manually designed procedure is often used when training the models. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform multi-step inference implicitly through a controller and shared memory. Without a human-designed inference procedure, IRNs use training data to learn to perform multi-step inference in an embedding neural space through the shared memory and controller. While the inference procedure does not explicitly operate on top of observed triplets, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.",
"title": ""
},
{
"docid": "f291c66ebaa6b24d858103b59de792b7",
"text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.",
"title": ""
},
{
"docid": "b53c46bc41237333f68cf96208d0128c",
"text": "Practical pattern classi cation and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classi ed. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classi cation and knowledge discovery.",
"title": ""
},
{
"docid": "822fdafcb1cec1c0f54e82fb79900ff3",
"text": "Chlorophyll fluorescence imaging was used to follow infections of Nicotiana benthamiana with the hemibiotrophic fungus, Colletotrichum orbiculare. Based on Fv/Fm images, infected leaves were divided into: healthy tissue with values similar to non-inoculated leaves; water-soaked/necrotic tissue with values near zero; and non-necrotic disease-affected tissue with intermediate values, which preceded or surrounded water-soaked/necrotic tissue. Quantification of Fv/Fm images showed that there were no changes until late in the biotrophic phase when spots of intermediate Fv/Fm appeared in visibly normal tissue. Those became water-soaked approx. 24 h later and then turned necrotic. Later in the necrotrophic phase, there was a rapid increase in affected and necrotic tissue followed by a slower increase as necrotic areas merged. Treatment with the induced systemic resistance activator, 2R, 3R-butanediol, delayed affected and necrotic tissue development by approx. 24 h. Also, the halo of affected tissue was narrower indicating that plant cells retained a higher photosystem II efficiency longer prior to death. While chlorophyll fluorescence imaging can reveal much about the physiology of infected plants, this study demonstrates that it is also a practical tool for quantifying hemibiotrophic fungal infections, including affected tissue that is appears normal visually but is damaged by infection.",
"title": ""
},
{
"docid": "28c8e13252ea46d888d4d9a4dedf61a5",
"text": "It is almost cliché to say that there has been an explosion in the amount of research on leadership in a cross-cultural context. In this review, we describe major advances and emerging patterns in this research domain over the last several years. Our starting point for this update is roughly 1996–1997, since those are the dates of two important reviews of the cross-cultural leadership literature [specifically, House, Wright, and Aditya (House, R. J., Wright, N. S., & Aditya, R. N. (1997). Cross-cultural research on organizational leadership: A critical analysis and a proposed theory. In: P. C. Earley, & M. Erez (Eds.), New perspectives on international industrial/organizational psychology (pp. 535–625). San Francisco, CA) and Dorfman (Dorfman, P. W. (1996). International and cross-cultural leadership research. In: B. J. Punnett, & O. Shenkar (Eds.), Handbook for international management research, pp. 267–349, Oxford, UK: Blackwell)]. We describe the beginnings of the decline in the quest for universal leadership principles that apply equivalently across all cultures, and we focus on the increasing application of the dimensions of culture identified by Hofstede [Hofstede, G. (1980). Culture’s consequences: International differences in work-related values (Abridged ed.). Newbury Park, CA: Sage] and others to describe variation in leadership styles, practices, and preferences. We also note the emergence of the field of cross-cultural leadership as a legitimate and independent field of endeavor, as reflected in the emergence of publication outlets for this research, and the establishment of long-term multinational multi-investigator research programs on the topic. We conclude with a discussion of progress made since the two pieces that were our departure point, and of progress yet to be made. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "259e95c8d756f31408d30bbd7660eea3",
"text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths. Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.",
"title": ""
},
{
"docid": "2a60bb7773d2e5458de88d2dc0e78e54",
"text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.",
"title": ""
},
{
"docid": "59d57e31357eb72464607e89ba4ba265",
"text": "Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community. Wp 1 http://www.pds.ewi.tudelft.nl/∼iosup/ S. Ostermann et al. Wp Early Cloud Computing EvaluationWp PDS",
"title": ""
},
{
"docid": "17ab4797666afed3a37a8761fcbb0d1e",
"text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppresssing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies, the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane(WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling which in turn improves the peak realized gain, directivity.",
"title": ""
},
{
"docid": "859e5fda6de846a73c291dbe656d4137",
"text": "A platform to study ultrasound as a source for wireless energy transfer and communication for implanted medical devices is described. A tank is used as a container for a pair of electroacoustic transducers, where a control unit is fixed to one wall of the tank and a transponder can be manually moved in three axes and rotate using a mechanical system. The tank is filled with water to allow acoustic energy and data transfer, and the system is optimized to avoid parasitic effects due to cables, reflection paths and cross talk problems. A printed circuit board is developed to test energy scavenging such that enough acoustic intensity is generated by the control unit to recharge a battery loaded to the transponder. In the same manner, a second printed circuit board is fabricated to study transmission of information through acoustic waves.",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] | scidocsrr |
bb83e8b1e9d238b4483e1dd29c62e1ab | Tangential beam IMRT versus tangential beam 3D-CRT of the chest wall in postmastectomy breast cancer patients: A dosimetric comparison | [
{
"docid": "94ceacc37c20034658dae3008ed59ab2",
"text": "BACKGROUND\nIn early breast cancer, variations in local treatment that substantially affect the risk of locoregional recurrence could also affect long-term breast cancer mortality. To examine this relationship, collaborative meta-analyses were undertaken, based on individual patient data, of the relevant randomised trials that began by 1995.\n\n\nMETHODS\nInformation was available on 42,000 women in 78 randomised treatment comparisons (radiotherapy vs no radiotherapy, 23,500; more vs less surgery, 9300; more surgery vs radiotherapy, 9300). 24 types of local treatment comparison were identified. To help relate the effect on local (ie, locoregional) recurrence to that on breast cancer mortality, these were grouped according to whether or not the 5-year local recurrence risk exceeded 10% (<10%, 17,000 women; >10%, 25,000 women).\n\n\nFINDINGS\nAbout three-quarters of the eventual local recurrence risk occurred during the first 5 years. In the comparisons that involved little (<10%) difference in 5-year local recurrence risk there was little difference in 15-year breast cancer mortality. Among the 25,000 women in the comparisons that involved substantial (>10%) differences, however, 5-year local recurrence risks were 7% active versus 26% control (absolute reduction 19%), and 15-year breast cancer mortality risks were 44.6% versus 49.5% (absolute reduction 5.0%, SE 0.8, 2p<0.00001). These 25,000 women included 7300 with breast-conserving surgery (BCS) in trials of radiotherapy (generally just to the conserved breast), with 5-year local recurrence risks (mainly in the conserved breast, as most had axillary clearance and node-negative disease) 7% versus 26% (reduction 19%), and 15-year breast cancer mortality risks 30.5% versus 35.9% (reduction 5.4%, SE 1.7, 2p=0.0002; overall mortality reduction 5.3%, SE 1.8, 2p=0.005). They also included 8500 with mastectomy, axillary clearance, and node-positive disease in trials of radiotherapy (generally to the chest wall and regional lymph nodes), with similar absolute gains from radiotherapy; 5-year local recurrence risks (mainly at these sites) 6% versus 23% (reduction 17%), and 15-year breast cancer mortality risks 54.7% versus 60.1% (reduction 5.4%, SE 1.3, 2p=0.0002; overall mortality reduction 4.4%, SE 1.2, 2p=0.0009). Radiotherapy produced similar proportional reductions in local recurrence in all women (irrespective of age or tumour characteristics) and in all major trials of radiotherapy versus not (recent or older; with or without systemic therapy), so large absolute reductions in local recurrence were seen only if the control risk was large. To help assess the life-threatening side-effects of radiotherapy, the trials of radiotherapy versus not were combined with those of radiotherapy versus more surgery. There was, at least with some of the older radiotherapy regimens, a significant excess incidence of contralateral breast cancer (rate ratio 1.18, SE 0.06, 2p=0.002) and a significant excess of non-breast-cancer mortality in irradiated women (rate ratio 1.12, SE 0.04, 2p=0.001). Both were slight during the first 5 years, but continued after year 15. 
The excess mortality was mainly from heart disease (rate ratio 1.27, SE 0.07, 2p=0.0001) and lung cancer (rate ratio 1.78, SE 0.22, 2p=0.0004).\n\n\nINTERPRETATION\nIn these trials, avoidance of a local recurrence in the conserved breast after BCS and avoidance of a local recurrence elsewhere (eg, the chest wall or regional nodes) after mastectomy were of comparable relevance to 15-year breast cancer mortality. Differences in local treatment that substantially affect local recurrence rates would, in the hypothetical absence of any other causes of death, avoid about one breast cancer death over the next 15 years for every four local recurrences avoided, and should reduce 15-year overall mortality.",
"title": ""
}
] | [
{
"docid": "86f0fa880f2a72cd3bf189132cc2aa44",
"text": "The advent of new technical solutions has offered a vast scope to encounter the existing challenges in tablet coating technology. One such outcome is the usage of innovative aqueous coating compositions to meet the limitations of organic based coating. The present study aimed at development of delayed release pantoprazole sodium tablets by coating with aqueous acrylic system belonging to methacrylic acid copolymer and to investigate the ability of the dosage form to protect the drug from acid milieu and to release rapidly in the duodenal pH. The core tablets were produced by direct compression using different disintegrants in variable concentrations. The physicochemical properties of all the tablets were consistent and satisfactory. Crosspovidone at 7.5% proved to be a better disintegrant with rapid disintegration with a minute, owing to its wicking properties. The optimized formulations were seal coated using HPMC dispersion to act as a barrier between the acid liable drug and enteric film coatings. The subcoating process was followed by enteric coating of tablets by the application of acryl-Eze at different theoretical weight gains. Enteric coated formulations were subjected to disintegration and dissolution tests by placing them in 0.1 N HCl for 2 h and then in pH 6.8 phosphate buffer for 1 h. The coated tablets remained static without peeling or cracking in the acid media, however instantly disintegrated in the intestinal pH. In the in vitro release studies, the optimized tablets released 0.16% in the acid media and 96% in the basic media which are well within the selected criteria. Results of the stability tests were satisfactory with the dissolution rate and assays were within acceptable limits. The results ascertained the acceptability of the aqueous based enteric coating composition for the successful development of delayed release, duodenal specific dosage forms for proton pump inhibitors.",
"title": ""
},
{
"docid": "ea1072f2972dbf15ef8c2d38704a0095",
"text": "The reliability of the microinverter is a very important feature that will determine the reliability of the ac-module photovoltaic (PV) system. Recently, many topologies and techniques have been proposed to improve its reliability. This paper presents a thorough study for different power decoupling techniques in single-phase microinverters for grid-tie PV applications. These power decoupling techniques are categorized into three groups in terms of the decoupling capacitor locations: 1) PV-side decoupling; 2) dc-link decoupling; and 3) ac-side decoupling. Various techniques and topologies are presented, compared, and scrutinized in scope of the size of decoupling capacitor, efficiency, and control complexity. Also, a systematic performance comparison is presented for potential power decoupling topologies and techniques.",
"title": ""
},
{
"docid": "1a2d9da5b42a7ae5a8dcf5fef48cfe26",
"text": "The space of bio-inspired hardware can be partitioned along three axes: phylogeny, ontogeny, and epigenesis. We refer to this as the POE model. Our Embryonics (for embryonic electronics) project is situated along the ontogenetic axis of the POE model and is inspired by the processes of molecular biology and by the embryonic development of living beings. We will describe the architecture of multicellular automata that are endowed with self-replication and self-repair properties. In the conclusion, we will present our major on-going project: a giant self-repairing electronic watch, the BioWatch, built on a new reconfigurable tissue, the electronic wall or e–wall.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "80ece123483d6de02c4e621bdb8eb0fc",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "8e109bae5f59f84bb9b2ad88acfac446",
"text": "A proposal is made to use blockchain technology for recording contracts. A new protocol using the technology is described that makes it possible to confirm that contractor consent has been obtained and to archive the contractual document in the blockchain.",
"title": ""
},
{
"docid": "8c284159d0ba43f67c3c478763e7f200",
"text": "We develop a new graph-theoretic approach for pairwise data clustering which is motivated by the analogies between the intuitive concept of a cluster and that of a dominant set of vertices, a notion introduced here which generalizes that of a maximal complete subgraph to edge-weighted graphs. We establish a correspondence between dominant sets and the extrema of a quadratic form over the standard simplex, thereby allowing the use of straightforward and easily implementable continuous optimization techniques from evolutionary game theory. Numerical examples on various point-set and image segmentation problems confirm the potential of the proposed approach",
"title": ""
},
{
"docid": "8dc400d9745983da1e91f0cec70606c9",
"text": "Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java.\nWe found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.",
"title": ""
},
{
"docid": "81385958cac7df4cc51b35762e6c2806",
"text": "DDoS attacks remain a serious threat not only to the edge of the Internet but also to the core peering links at Internet Exchange Points (IXPs). Currently, the main mitigation technique is to blackhole traffic to a specific IP prefix at upstream providers. Blackholing is an operational technique that allows a peer to announce a prefix via BGP to another peer, which then discards traffic destined for this prefix. However, as far as we know there is only anecdotal evidence of the success of blackholing. Largely unnoticed by research communities, IXPs have deployed blackholing as a service for their members. In this first-of-its-kind study, we shed light on the extent to which blackholing is used by the IXP members and what effect it has on traffic. Within a 12 week period we found that traffic to more than 7, 864 distinct IP prefixes was blackholed by 75 ASes. The daily patterns emphasize that there are not only a highly variable number of new announcements every day but, surprisingly, there are a consistently high number of announcements (> 1000). Moreover, we highlight situations in which blackholing succeeds in reducing the DDoS attack traffic.",
"title": ""
},
{
"docid": "8da0d4884947d973a9121ea8f726ea61",
"text": "Soil and water pollution is becoming one of major burden in modern Indian society due to industrialization. Though there are many methods to remove the heavy metal from soil and water pollution but biosorption is one of the best scientific methods to remove heavy metal from water sample by using biomolecules and bacteria. Biosorbent have the ability to bind the heavy metal and therefore can remove from polluted water. Currently, we have taken the water sample from Ballendur Lake, Bangalore. Which is highly polluted due to industries besides this lake. This sample of water was serially diluted to 10-7. 10-4 and 10-5 diluted sample was allowed to stand in Tryptone Glucose Extract agar media mixed with the different concentrations of lead acetate for 24 hours. Microflora growth was observed. Then we cultured in different temperature, pH and different age of culture media. Finally, we did the biochemical test to identify the bacteria isolate and we found till genus level, it could be either Streptococcus sp. or Enterococcus sp.",
"title": ""
},
{
"docid": "2534ef0135eaba7e85a44c81c637adae",
"text": "k e 9 { Vol. 32, No. 1 2006 O 1 2 ACTA AUTOMATICA SINICA January, 2006 82 7? CPG , ? \". C4; 1) INH1, 3, 4 JKF1, 2 G D1 LME1 1(W\" z. jd8,EYpz\\\\ 110016) 2(u = Hz COE(Center of Excellence) <pE g. 525-8577 u ) ( Hz 110168) 4(W\" z. y . s 100039) (E-mail: luzhl@sia.cn) < 5 ~ P}*}nFZqTf L4f1℄ ~< Q CPG \\? 4 \\( }nFZq uD6?j }nFZq?j |= E( CPG ?jmy 4f QT br 3<! E(ÆX FT A QT CPG -!adyu r1 <(m1T zy _ } }nFZq X ? r Z ~< Q y 4f & TP24",
"title": ""
},
{
"docid": "104c71324594c907f87d483c8c222f0f",
"text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.",
"title": ""
},
{
"docid": "67bef3bbd769010e91548649eae454fa",
"text": "As networked and computer technologies continue to pervade all aspects of our lives, the threat from cyber attacks has also increased. However, detecting attacks, much less predicting them in advance, is a non-trivial task due to the anonymity of cyber attackers and the ambiguity of network data collected within an organization; often, by the time an attack pattern is recognized, the damage has already been done. Evidence suggests that the public discourse in external sources, such as news and social media, is often correlated with the occurrence of larger phenomena, such as election results or violent attacks. In this paper, we propose an approach that uses sentiment polarity as a sensor to analyze the social behavior of groups on social media as an indicator of cyber attack behavior. We developed an unsupervised sentiment prediction method that uses emotional signals to enhance the sentiment signal from sparse textual indicators. To explore the efficacy of sentiment polarity as an indicator of cyberattacks, we performed experiments using real-world data from Twitter that corresponds to attacks by a well-known hacktivist group.",
"title": ""
},
{
"docid": "a016fb3b7e5c4bcf386d775c7c61a887",
"text": "How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author’s own perceptions of its credibility. We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers’ assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices — and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "faa3d0432cbade209fa876240c5db4c0",
"text": "BACKGROUND\nDespite the clinical importance of atrial fibrillation (AF), the development of chronic nonvalvular AF models has been difficult. Animal models of sustained AF have been developed primarily in the short-term setting. Recently, models of chronic ventricular myopathy and fibrillation have been developed after several weeks of continuous rapid ventricular pacing. We hypothesized that chronic rapid atrial pacing would lead to atrial myopathy, yielding a reproducible model of sustained AF.\n\n\nMETHODS AND RESULTS\nTwenty-two halothane-anesthetized mongrel dogs underwent insertion of a transvenous lead at the right atrial appendage that was continuously paced at 400 beats per minute for 6 weeks. Two-dimensional echocardiography was performed in 11 dogs to assess the effects of rapid atrial pacing on atrial size. Atrial vulnerability was defined as the ability to induce sustained repetitive atrial responses during programmed electrical stimulation and was assessed by extrastimulus and burst-pacing techniques. Effective refractory period (ERP) was measured at two endocardial sites in the right atrium. Sustained AF was defined as AF > or = 15 minutes. In animals with sustained AF, 10 quadripolar epicardial electrodes were surgically attached to the right and left atria. The local atrial fibrillatory cycle length (AFCL) was measured in a 20-second window, and the mean AFCL was measured at each site. Marked biatrial enlargement was documented; after 6 weeks of continuous rapid atrial pacing, the left atrium was 7.8 +/- 1 cm2 at baseline versus 11.3 +/- 1 cm2 after pacing, and the right atrium was 4.3 +/- 0.7 cm2 at baseline versus 7.2 +/- 1.3 cm2 after pacing. An increase in atrial area of at least 40% was necessary to induce sustained AF and was strongly correlated with the inducibility of AF (r = .87). Electron microscopy of atrial tissue demonstrated structural changes that were characterized by an increase in mitochondrial size and number and by disruption of the sarcoplasmic reticulum. After 6 weeks of continuous rapid atrial pacing, sustained AF was induced in 18 dogs (82%) and nonsustained AF was induced in 2 dogs (9%). AF occurred spontaneously in 4 dogs (18%). Right atrial ERP, measured at cycle lengths of 400 and 300 milliseconds at baseline, was significantly shortened after pacing, from 150 +/- 8 to 127 +/- 10 milliseconds and from 147 +/- 11 to 123 +/- 12 milliseconds, respectively (P < .001). This finding was highly predictive of inducibility of AF (90%). Increased atrial area (40%) and ERP shortening were highly predictive for the induction of sustained AF (88%). Local epicardial ERP correlated well with local AFCL (R2 = .93). Mean AFCL was significantly shorter in the left atrium (81 +/- 8 milliseconds) compared with the right atrium 94 +/- 9 milliseconds (P < .05). An area in the posterior left atrium was consistently found to have a shorter AFCL (74 +/- 5 milliseconds). Cryoablation of this area was attempted in 11 dogs. In 9 dogs (82%; mean, 9.0 +/- 4.0; range, 5 to 14), AF was terminated and no longer induced after serial cryoablation.\n\n\nCONCLUSIONS\nSustained AF was readily inducible in most dogs (82%) after rapid atrial pacing. This model was consistently associated with biatrial myopathy and marked changes in atrial vulnerability. An area in the posterior left atrium was uniformly shown to have the shortest AFCL. 
The results of restoration of sinus rhythm and prevention of inducibility of AF after cryoablation of this area of the left atrium suggest that this area may be critical in the maintenance of AF in this model.",
"title": ""
},
{
"docid": "7323cf16224197b312d1a4c7ff4168ea",
"text": "It is well known that animals can use neural and sensory feedback via vision, tactile sensing, and echolocation to negotiate obstacles. Similarly, most robots use deliberate or reactive planning to avoid obstacles, which relies on prior knowledge or high-fidelity sensing of the environment. However, during dynamic locomotion in complex, novel, 3D terrains, such as a forest floor and building rubble, sensing and planning suffer bandwidth limitation and large noise and are sometimes even impossible. Here, we study rapid locomotion over a large gap-a simple, ubiquitous obstacle-to begin to discover the general principles of the dynamic traversal of large 3D obstacles. We challenged the discoid cockroach and an open-loop six-legged robot to traverse a large gap of varying length. Both the animal and the robot could dynamically traverse a gap as large as one body length by bridging the gap with its head, but traversal probability decreased with gap length. Based on these observations, we developed a template that accurately captured body dynamics and quantitatively predicted traversal performance. Our template revealed that a high approach speed, initial body pitch, and initial body pitch angular velocity facilitated dynamic traversal, and successfully predicted a new strategy for using body pitch control that increased the robot's maximal traversal gap length by 50%. Our study established the first template of dynamic locomotion beyond planar surfaces, and is an important step in expanding terradynamics into complex 3D terrains.",
"title": ""
},
{
"docid": "4affe8335240844414a51355593bfbe0",
"text": "— This paper reviews and extends some recent results on the multivariate fractional Brownian motion (mfBm) and its increment process. A characterization of the mfBm through its covariance function is obtained. Similarly, the correlation and spectral analyses of the increments are investigated. On the other hand we show that (almost) all mfBm’s may be reached as the limit of partial sums of (super)linear processes. Finally, an algorithm to perfectly simulate the mfBm is presented and illustrated by some simulations. Résumé (Propriétés du mouvement brownien fractionnaire multivarié) Cet article constitue une synthèse des propriétés du mouvement brownien fractionnaire multivarié (mBfm) et de ses accroissements. Différentes caractérisations du mBfm sont présentées à partir soit de la fonction de covariance, soit de représentations intégrales. Nous étudions aussi les propriétés temporelles et spectrales du processus des accroissements. D’autre part, nous montrons que (presque) tous les mBfm peuvent être atteints comme la limite (au sens de la convergence faible) des sommes partielles de processus (super)linéaires. Enfin, un algorithme de simulation exacte est présenté et quelques simulations illustrent les propriétés du mBfm.",
"title": ""
},
{
"docid": "c35db6f50a6ca89d45172faf0332946a",
"text": "Mobile commerce had been expected to become a major force of e-commerce in the 21st century. However, the rhetoric has far exceeded the reality so far. While academics and practitioners have presented many views about the lack of rapid growth of mobile commerce, we submit that the anticipated mobile commerce take-off hinges on the emergence of a few killer apps. After reviewing the recent history of technologies that have dramatically changed our way of life and work, we propose a set of criteria for identifying and evaluating killer apps. From this vantage point, we argue that mobile payment and banking are the most likely candidates for the killer apps that could bring the expectation of a world of ubiquitous mobile commerce to fruition. Challenges and opportunities associated with this argument are discussed.",
"title": ""
},
{
"docid": "3c577fcd0d0876af4aa031affa3bd168",
"text": "Domain-specific Internet of Things (IoT) applications are becoming more and more popular. Each of these applications uses their own technologies and terms to describe sensors and their measurements. This is a difficult task to help users build generic IoT applications to combine several domains. To explicitly describe sensor measurements in uniform way, we propose to enrich them with semantic web technologies. Domain knowledge is already defined in more than 200 ontology and sensor-based projects that we could reuse to build cross-domain IoT applications. There is a huge gap to reason on sensor measurements without a common nomenclature and best practices to ease the automation of generic IoT applications. We present our Machine-to-Machine Measurement (M3) framework and share lessons learned to improve existing standards such as oneM2M, ETSI M2M, W3C Web of Things and W3C Semantic Sensor Network.",
"title": ""
}
] | scidocsrr |
8b83b7be2115801005e3fd42ff9ec760 | Music-evoked nostalgia: affect, memory, and personality. | [
{
"docid": "4b04a4892ef7c614b3bf270f308e6984",
"text": "One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n=354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n=801)--a field study carried out during a music festival--examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n=238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced--the Geneva Emotional Music Scale.",
"title": ""
}
] | [
{
"docid": "3f50585a983c91575c38c52219091c63",
"text": "Most fingerprint matching systems are based on matching minutia points between two fingerprint images. Each minutia is represented by a fixed number of attributes such as the location, orientation, type and other local information. A hard decision is made on the match between a pair of minutiae based on the similarity of these attributes. In this paper, we present a minutiae matching algorithm that uses spatial correlation of regions around the minutiae to ascertain the quality of each minutia match. The proposed algorithm has two main advantages. Since the gray level values of the pixels around a minutia point retain most of the local information, spatial correlation provides an accurate measure of the similarity between minutia regions. Secondly, no hard decision is made on the correspondence between a minutia pair. Instead the quality of all the minutiae matches are accumulated to arrive at the final matching score between the template and query fingerprint impressions. Experiments on a database of 160 users (4 impressions per finger) indicate that the proposed algorithm serves well to complement the 2D dynamic programming based minutiae matching technique; a combination of these two methods can reduce the false non-match rate by approximately 3.5% at a false match rate of 0.1%.",
"title": ""
},
{
"docid": "e82d3eedc733d536c49a69856ad66e00",
"text": "Artificial neural networks, trained only on sample deals, without presentation of any human knowledge or even rules of the game, are used to estimate the number of tricks to be taken by one pair of bridge players in the so-called double dummy bridge problem (DDBP). Four representations of a deal in the input layer were tested leading to significant differences in achieved results. In order to test networks' abilities to extract knowledge from sample deals, experiments with additional inputs representing estimators of hand's strength used by humans were also performed. The superior network trained solely on sample deals outperformed all other architectures, including those using explicit human knowledge of the game of bridge. Considering the suit contracts, this network, in a sample of 100 000 testing deals, output a perfect answer in 53.11% of the cases and only in 3.52% of them was mistaken by more than one trick. The respective figures for notrump contracts were equal to 37.80% and 16.36%. The above results were compared with the ones obtained by 24 professional human bridge players-members of The Polish Bridge Union-on test sets of sizes between 27 and 864 deals per player (depending on player's time availability). In case of suit contracts, the perfect answer was obtained in 53.06% of the testing deals for ten upper-classified players and in 48.66% of them, for the remaining 14 participants of the experiment. For the notrump contracts, the respective figures were equal to 73.68% and 60.78%. Except for checking the ability of neural networks in solving the DDBP, the other goal of this research was to analyze connection weights in trained networks in a quest for weights' patterns that are explainable by experienced human bridge players. Quite surprisingly, several such patterns were discovered (e.g., preference for groups of honors, drawing special attention to Aces, favoring cards from a trump suit, gradual importance of cards in one suit-from two to the Ace, etc.). Both the numerical figures and weight patterns are stable and repeatable in a sample of neural architectures (differing only by randomly chosen initial weights). In summary, the piece of research described in this paper provides a detailed comparison between various data representations of the DDBP solved by neural networks. On a more general note, this approach can be extended to a certain class of binary classification problems.",
"title": ""
},
{
"docid": "22f61d8bab9ba3b89b9ce23d5ee2ef04",
"text": "Images of female scientists and engineers in popular$lms convey cultural and social assumptions about the role of women in science, engineering, and technology (SET). This study analyzed cultural representations of gender conveyed through images offemale scientists andengineers in popularjilms from 1991 to 2001. While many of these depictions of female scientists and engineers emphasized their appearance and focused on romance, most depictions also presented female scientists and engineers in professional positions of high status. Other images that showed the fernale scientists and engineers' interactions with male colleagues, ho~vevel; reinforced traditional social and cultural assumptions about the role of women in SET through overt and subtle forms of stereotyping. This article explores the sign$cance of thesejindings fordevelopingprograms to change girls'perceptions of scientists and engineers and attitudes toward SET careers.",
"title": ""
},
{
"docid": "a1fcf0d2b9a619c0a70b210c70cf4bfd",
"text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.",
"title": ""
},
{
"docid": "1bf462c3645458c0bd2e88c237a885f1",
"text": "OBJECTIVE\nUsing a new construct, job embeddedness, from the business management literature, this study first examines its value in predicting employee retention in a healthcare setting and second, assesses whether the factors that influence the retention of nurses are systematically different from those influencing other healthcare workers.\n\n\nBACKGROUND\nThe shortage of skilled healthcare workers makes it imperative that healthcare providers develop effective recruitment and retention plans. With nursing turnover averaging more than 20% a year and competition to hire new nurses fierce, many administrators rightly question whether they should develop specialized plans to recruit and retain nurses.\n\n\nMETHODS\nA longitudinal research design was employed to assess the predictive validity of the job embeddedness concept. At time 1, surveys were mailed to a random sample of 500 employees of a community-based hospital in the Northwest region of the United States. The survey assessed personal characteristics, job satisfaction, organizational commitment, job embeddedness, job search, perceived alternatives, and intent to leave. One year later (time 2) the organization provided data regarding voluntary leavers from the hospital.\n\n\nRESULTS\nHospital employees returned 232 surveys, yielding a response rate of 46.4 %. The results indicate that job embeddedness predicted turnover over and beyond a combination of perceived desirability of movement measures (job satisfaction, organizational commitment) and perceived ease of movement measures (job alternatives, job search). Thus, job embeddedness assesses new and meaningful variance in turnover in excess of that predicted by the major variables included in almost all the major models of turnover.\n\n\nCONCLUSIONS\nThe findings suggest that job embeddedness is a valuable lens through which to evaluate employee retention in healthcare organizations. Further, the levers for influencing retention are substantially similar for nurses and other healthcare workers. Implications of these findings and recommendations for recruitment and retention policy development are presented.",
"title": ""
},
{
"docid": "c694936a9b8f13654d06b72c077ed8f4",
"text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.",
"title": ""
},
{
"docid": "604362129b2ed5510750cc161cf54bbf",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.",
"title": ""
},
{
"docid": "4d8335fa722e1851536182d5657ab738",
"text": "Location-aware mobile applications have become extremely common, with a recent wave of mobile dating applications that provide relatively sparse profiles to connect nearby individuals who may not know each other for immediate social or sexual encounters. These applications have become particularly popular among men who have sex with men (MSM) and raise a range of questions about self-presentation, visibility to others, and impression formation, as traditional geographic boundaries and social circles are crossed. In this paper we address two key questions around how people manage potentially stigmatized identities in using these apps and what types of information they use to self-present in the absence of a detailed profile or rich social cues. To do so, we draw on profile data observed in twelve locations on Grindr, a location-aware social application for MSM. Results suggest clear use of language to manage stigma associated with casual sex, and that users draw regularly on location information and other descriptive language to present concisely to others nearby.",
"title": ""
},
{
"docid": "81cc7e40bd2b2b13a026022148e3c7d1",
"text": "BACKGROUND\nThe long-term treatment of Parkinson disease (PD) may be complicated by the development of levodopa-induced dyskinesia. Clinical and animal model data support the view that modulation of cannabinoid function may exert an antidyskinetic effect. The authors conducted a randomized, double-blind, placebo-controlled crossover trial to examine the hypothesis that cannabis may have a beneficial effect on dyskinesia in PD.\n\n\nMETHODS\nA 4-week dose escalation study was performed to assess the safety and tolerability of cannabis in six PD patients with levodopa-induced dyskinesia. Then a randomized placebo-controlled crossover study (RCT) was performed, in which 19 PD patients were randomized to receive oral cannabis extract followed by placebo or vice versa. Each treatment phase lasted for 4 weeks with an intervening 2-week washout phase. The primary outcome measure was a change in Unified Parkinson's Disease Rating Scale (UPDRS) (items 32 to 34) dyskinesia score. Secondary outcome measures included the Rush scale, Bain scale, tablet arm drawing task, and total UPDRS score following a levodopa challenge, as well as patient-completed measures of a dyskinesia activities of daily living (ADL) scale, the PDQ-39, on-off diaries, and a range of category rating scales.\n\n\nRESULTS\nSeventeen patients completed the RCT. Cannabis was well tolerated, and had no pro- or antiparkinsonian action. There was no evidence for a treatment effect on levodopa-induced dyskinesia as assessed by the UPDRS, or any of the secondary outcome measures.\n\n\nCONCLUSIONS\nOrally administered cannabis extract resulted in no objective or subjective improvement in dyskinesias or parkinsonism.",
"title": ""
},
{
"docid": "349df0d3c48b6c1b6fcad1935f5e1e0a",
"text": "Automatic facial expression recognition has many potential applications in different areas of human computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in 8 directions at each pixel and encoding them into an 8 bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification using the Cohn-Kanade and Japanese female facial expression databases. Better classification accuracy shows the superiority of LDP descriptor against other appearance-based feature descriptors.",
"title": ""
},
{
"docid": "8056b29e7b39dee06f04b738807a53f9",
"text": "This paper proposes a novel topology of a multiport DC/DC converter composed of an H-bridge inverter, a high-frequency galvanic isolation transformer, and a combined circuit with a current-doubler and a buck chopper. The topology has lower conduction loss by multiple current paths and smaller output capacitors by means of an interleave operation. Results of computer simulations and experimental tests show proper operations and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "fe38de8c129845b86ee0ec4acf865c14",
"text": "0 7 4 0 7 4 5 9 / 0 2 / $ 1 7 . 0 0 © 2 0 0 2 I E E E McDonald’s develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.",
"title": ""
},
{
"docid": "d1ad10c873fd5a02d1ce072b4ffc788c",
"text": "Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.",
"title": ""
},
{
"docid": "223d5658dee7ba628b9746937aed9bb3",
"text": "A low-power receiver with a one-tap data and edge decision-feedback equalizer (DFE) and a clock recovery circuit is presented. The receiver employs analog adders for the tap-weight summation in both the data and the edge path to simultaneously optimize both the voltage and timing margins. A switched-capacitor input stage allows the receiver to be fully compatible with near-GND input levels without extra level conversion circuits. Furthermore, the critical path of the DFE is simplified to relax the timing margin. Fabricated in the 65-nm CMOS technology, a prototype DFE receiver shows that the data-path DFE extends the voltage and timing margins from 40 mVpp and 0.3 unit interval (UI), respectively, to 70 mVpp and 0.6 UI, respectively. Likewise, the edge-path equalizer reduces the uncertain sampling region (the edge region), which results in 17% reduction of the recovered clock jitter. The DFE core, including adders and samplers, consumes 1.1 mW from a 1.2-V supply while operating at 6.4 Gb/s.",
"title": ""
},
{
"docid": "54032bb625ea3c4bc8cd408c4f9f0324",
"text": "This study integrates an ecological perspective and trauma theory in proposing a model of the effects of domestic violence on women's parenting and children's adjustment. One hundred and twenty women and their children between the ages of 7 and 12 participated. Results supported an ecological model of the impact of domestic violence on women and children. The model predicted 40% of the variance in children's adjustment, 8% of parenting style, 43% of maternal psychological functioning, and 23% of marital satisfaction, using environmental factors such as social support, negative life events, and maternal history of child abuse. Overall, results support the ecological framework and trauma theory in understanding the effects of domestic violence on women and children. Rather than focusing on internal pathology, behavior is seen to exist on a continuum influenced heavily by the context in which the person is developing.",
"title": ""
},
{
"docid": "00b8207e783aed442fc56f7b350307f6",
"text": "A mathematical tool to build a fuzzy model of a system where fuzzy implications and reasoning are used is presented. The premise of an implication is the description of fuzzy subspace of inputs and its consequence is a linear input-output relation. The method of identification of a system using its input-output data is then shown. Two applications of the method to industrial processes are also discussed: a water cleaning process and a converter in a steel-making process.",
"title": ""
},
{
"docid": "5621d7df640dbe3d757ebb600486def9",
"text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.",
"title": ""
},
{
"docid": "37ba886ef73a8d35b4e9a4ae5dfa68bf",
"text": "Owe to the rapid development of deep neural network (DNN) techniques and the emergence of large scale face databases, face recognition has achieved a great success in recent years. During the training process of DNN, the face features and classification vectors to be learned will interact with each other, while the distribution of face features will largely affect the convergence status of network and the face similarity computing in test stage. In this work, we formulate jointly the learning of face features and classification vectors, and propose a simple yet effective centralized coordinate learning (CCL) method, which enforces the features to be dispersedly spanned in the coordinate space while ensuring the classification vectors to lie on a hypersphere. An adaptive angular margin is further proposed to enhance the discrimination capability of face features. Extensive experiments are conducted on six face benchmarks, including those have large age gap and hard negative samples. Trained only on the small-scale CASIA Webface dataset with 460K face images from about 10K subjects, our CCL model demonstrates high effectiveness and generality, showing consistently competitive performance across all the six benchmark databases.",
"title": ""
},
{
"docid": "0cf25d7f955a2eb7b015b4de91bb4524",
"text": "We describe the University of Maryland machine translation systems submitted to the IWSLT 2015 French-English and Vietnamese-English tasks. We built standard hierarchical phrase-based models, extended in two ways: (1) we applied novel data selection techniques to select relevant information from the large French-English training corpora, and (2) we experimented with neural language models. Our FrenchEnglish system compares favorably against the organizers’ baseline, while the Vietnamese-English one does not, indicating the difficulty of the translation scenario.",
"title": ""
}
] | scidocsrr |
1b1fa09d4579496218b4780fbdfc8d38 | JSAT: Java Statistical Analysis Tool, a Library for Machine Learning | [
{
"docid": "af56806a30f708cb0909998266b4d8c1",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
}
] | [
{
"docid": "304b65a4248c63e0217bdf89b1242393",
"text": "In a very competitive mobile telecommunication business environment, marketing managers need a business intelligence model that allows them to maintain an optimal (at least a near optimal) level of churners very effectively and efficiently while minimizing the costs throughout their marketing programs. As a first step toward optimal churn management program for marketing managers, this paper focuses on building an accurate and concise predictive model for the purpose of churn prediction utilizing a Partial Least Square (PLS)-based methodology on highly correlated data sets among variables. A preliminary experiment demonstrates that the presented model provides more accurate performance than traditional prediction models and identifies key variables to better understand churning behaviors. Further, a set of simple churn marketing programs--device management, overage management, and complaint management strategies—is presented and discussed.",
"title": ""
},
{
"docid": "777c65f8123dd718d6faefaa1fec0b15",
"text": "BACKGROUND\nProcessed meat and fish have been shown to be associated with the risk of advanced prostate cancer, but few studies have examined diet after prostate cancer diagnosis and risk of its progression.\n\n\nOBJECTIVE\nWe examined the association between postdiagnostic consumption of processed and unprocessed red meat, fish, poultry, and eggs and the risk of prostate cancer recurrence or progression.\n\n\nDESIGN\nWe conducted a prospective study in 1294 men with prostate cancer, without recurrence or progression as of 2004-2005, who were participating in the Cancer of the Prostate Strategic Urologic Research Endeavor and who were followed for an average of 2 y.\n\n\nRESULTS\nWe observed 127 events (prostate cancer death or metastases, elevated prostate-specific antigen concentration, or secondary treatment) during 2610 person-years. Intakes of processed and unprocessed red meat, fish, total poultry, and skinless poultry were not associated with prostate cancer recurrence or progression. Greater consumption of eggs and poultry with skin was associated with 2-fold increases in risk in a comparison of extreme quantiles: eggs [hazard ratio (HR): 2.02; 95% CI: 1.10, 3.72; P for trend = 0.05] and poultry with skin (HR: 2.26; 95% CI: 1.36, 3.76; P for trend = 0.003). An interaction was observed between prognostic risk at diagnosis and poultry. Men with high prognostic risk and a high poultry intake had a 4-fold increased risk of recurrence or progression compared with men with low/intermediate prognostic risk and a low poultry intake (P for interaction = 0.003).\n\n\nCONCLUSIONS\nOur results suggest that the postdiagnostic consumption of processed or unprocessed red meat, fish, or skinless poultry is not associated with prostate cancer recurrence or progression, whereas consumption of eggs and poultry with skin may increase the risk.",
"title": ""
},
{
"docid": "0a9aebb4725ad3f5c1f613fc3b8a0782",
"text": "In this work we present Neural Decision Forests, a novel approach to jointly tackle data representation- and discriminative learning within randomized decision trees. Recent advances of deep learning architectures demonstrate the power of embedding representation learning within the classifier -- An idea that is intuitively supported by the hierarchical nature of the decision forest model where the input space is typically left unchanged during training and testing. We bridge this gap by introducing randomized Multi- Layer Perceptrons (rMLP) as new split nodes which are capable of learning non-linear, data-specific representations and taking advantage of them by finding optimal predictions for the emerging child nodes. To prevent overfitting, we i) randomly select the image data fed to the input layer, ii) automatically adapt the rMLP topology to meet the complexity of the data arriving at the node and iii) introduce an l1-norm based regularization that additionally sparsifies the network. The key findings in our experiments on three different semantic image labelling datasets are consistently improved results and significantly compressed trees compared to conventional classification trees.",
"title": ""
},
{
"docid": "b515eb759984047f46f9a0c27b106f47",
"text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.",
"title": ""
},
{
"docid": "3c86c8681deb58a319c2aa27c795b9a1",
"text": "By means of the Ginzburg-Landau theory of phase transitions, we study a non-isothermal model to characterize the austenite-martensite transition in shape memory alloys. In the first part of this paper, the onedimensional model proposed in [3] is modified by varying the expression of the free energy. In this way, the description of the phenomenon of hysteresis, typical of these materials, is improved and the related stressstrain curves are recovered. Then, a generalization of this model to the three dimensional case is proposed and its consistency with the principles of thermodynamics is proved. Unlike other three dimensional models, the transition is characterized by a scalar valued order parameter φ and the Ginzburg-Landau equation, ruling the evolution of φ, allows us to prove a maximum principle, ensuring the boundedness of φ itself.",
"title": ""
},
{
"docid": "3d10793b2e4e63e7d639ff1e4cdf04b6",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "96f02d68f992d21733890d5f929975de",
"text": "A neural network (NN)-based adaptive controller with an observer is proposed for the trajectory tracking of robotic manipulators with unknown dynamics nonlinearities. It is assumed that the robotic manipulator has only joint angle position measurements. A linear observer is used to estimate the robot joint angle velocity, while NNs are employed to further improve the control performance of the controlled system through approximating the modified robot dynamics function. The adaptive controller for robots with an observer can guarantee the uniform ultimate bounds of the tracking errors and the observer errors as well as the bounds of the NN weights. For performance comparisons, the conventional adaptive algorithm with an observer using linearity in parameters of the robot dynamics is also developed in the same control framework as the NN approach for online approximating unknown nonlinearities of the robot dynamics. Main theoretical results for designing such an observer-based adaptive controller with the NN approach using multilayer NNs with sigmoidal activation functions, as well as with the conventional adaptive approach using linearity in parameters of the robot dynamics are given. The performance comparisons between the NN approach and the conventional adaptation approach with an observer is carried out to show the advantages of the proposed control approaches through simulation studies.",
"title": ""
},
{
"docid": "00223ccf5b5aebfc23c76afb7192e3f7",
"text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.",
"title": ""
},
{
"docid": "984dba43888e7a3572d16760eba6e9a5",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "cdcb06f49cd2c3756b238718d2530bea",
"text": "Logging statements produce logs that assist in understanding system behavior, monitoring choke-points and debugging. Prior research demonstrated the importance of logging statements in operating, understanding and improving software systems. The importance of logs has lead to a new market of log management and processing tools. However, logs are often unstable, i.e., the logging statements that generate logs are often changed without the consideration of other stakeholders, causing misleading results and failures of log processing tools. In order to proactively mitigate such issues that are caused by unstable logging statements, in this paper we empirically study the stability of logging statements in four open source applications namely:Liferay, ActiveMQ, Camel and Cloud Stack. We find that 20-45% of the logging statements in our studied applications change throughout their lifetime. The median number of days between the introduction of a logging statement and the first change to that statement is between 1 and 17 in our studied applications. These numbers show that in order to reduce maintenance effort, developers of log processing tools must be careful when selecting the logging statements on which they will let their tools depend. In this paper, we make an important first step towards assisting developers of log processing tools in determining whether a logging statement is likely to remain unchanged in the future. Using random forest classifiers, we examine which metrics are important for understanding whether a logging statement will change. We show that our classifiers achieve 83%-91% precision and 65%-85% recall in the four studied applications. We find that file ownership, developer experience, log density and SLOC are important metrics for determining whether a logging statement will change in the future. Developers can use this knowledge to build more robust log processing tools, by making those tools depend on logs that are generated by logging statements that are likely to remain unchanged.",
"title": ""
},
{
"docid": "3ca04efcb370e8a30ab5ad42d1d2d047",
"text": "The exceptionally adhesive foot of the gecko remains clean in dirty environments by shedding contaminants with each step. Synthetic gecko-inspired adhesives have achieved similar attachment strengths to the gecko on smooth surfaces, but the process of contact self-cleaning has yet to be effectively demonstrated. Here, we present the first gecko-inspired adhesive that has matched both the attachment strength and the contact self-cleaning performance of the gecko's foot on a smooth surface. Contact self-cleaning experiments were performed with three different sizes of mushroom-shaped elastomer microfibres and five different sizes of spherical silica contaminants. Using a load-drag-unload dry contact cleaning process similar to the loads acting on the gecko foot during locomotion, our fully contaminated synthetic gecko adhesives could recover lost adhesion at a rate comparable to that of the gecko. We observed that the relative size of contaminants to the characteristic size of the microfibres in the synthetic adhesive strongly determined how and to what degree the adhesive recovered from contamination. Our approximate model and experimental results show that the dominant mechanism of contact self-cleaning is particle rolling during the drag process. Embedding of particles between adjacent fibres was observed for particles with diameter smaller than the fibre tips, and further studied as a temporary cleaning mechanism. By incorporating contact self-cleaning capabilities, real-world applications of synthetic gecko adhesives, such as reusable tapes, clothing closures and medical adhesives, would become feasible.",
"title": ""
},
{
"docid": "2c1604c1592b974c78568bbe2f71485c",
"text": "BACKGROUND\nA self-rated measure of health anxiety should be sensitive across the full range of intensity (from mild concern to frank hypochondriasis) and should differentiate people suffering from health anxiety from those who have actual physical illness but who are not excessively concerned about their health. It should also encompass the full range of clinical symptoms characteristic of clinical hypochondriasis. The development and validation of such a scale is described.\n\n\nMETHOD\nThree studies were conducted. First, the questionnaire was validated by comparing the responses of patients suffering from hypochondriasis with those suffering from hypochondriasis and panic disorder, panic disorder, social phobia and non-patient controls. Secondly, a state version of the questionnaire was administered to patients undergoing cognitive-behavioural treatment or wait-list in order to examine the measure's sensitivity to change. In the third study, a shortened version was developed and validated in similar types of sample, and in a range of samples of people seeking medical help for physical illness.\n\n\nRESULTS\nThe scale was found to be reliable and to have a high internal consistency. Hypochondriacal patients scored significantly higher than anxiety disorder patients, including both social phobic patients and panic disorder patients as well as normal controls. In the second study, a 'state' version of the scale was found to be sensitive to treatment effects, and to correlate very highly with a clinician rating based on an interview of present clinical state. A development and refinement of the scale (intended to reflect more fully the range of symptoms of and reactions to hypochondriasis) was found to be reliable and valid. A very short (14 item) version of the scale was found to have comparable properties to the full length scale.\n\n\nCONCLUSIONS\nThe HAI is a reliable and valid measure of health anxiety. It is likely to be useful as a brief screening instrument, as there is a short form which correlates highly with the longer version.",
"title": ""
},
{
"docid": "3bd31dfc1cc1bd1868cc6d0c19503bb5",
"text": "Music genre recognition based on visual representation has been successfully explored over the last years. Classifiers trained with textural descriptors (e.g., Local Binary Patterns, Local Phase Quantization, and Gabor filters) extracted from the spectrograms have achieved state-of-the-art results on several music datasets. In this work, though, we argue that we can go further with the time-frequency analysis through the use of representation learning. To show that, we compare the results obtained with a Convolutional Neural Network (CNN) with the results obtained by using handcrafted features and SVM classifiers. In addition, we have performed experiments fusing the results obtained with learned features and handcrafted features to assess the complementarity between these representations for the music classification task. Experiments were conducted on three music databases with distinct characteristics, specifically a western music collection largely used in research benchmarks (ISMIR 2004 Database), a collection of Latin American music (LMD database), and a collection of field recordings of ethnic African music. Our experiments show that the CNN compares favorably to other classifiers in several scenarios, hence, it is a very interesting alternative for music genre recognition. Considering the African database, the CNN surpassed the handcrafted representations and also the state-of-the-art by a margin. In the case of the LMD database, the combination of CNN and Robust Local Binary Pattern achieved a recognition rate of 92%, which to the best of our knowledge, is the best result (using an artist filter) on this dataset so far. On the ISMIR 2004 dataset, although the CNN did not improve the state of the art, it performed better than the classifiers based individually on other kind of features.",
"title": ""
},
{
"docid": "7f52960fb76c3c697ef66ffee91b13ee",
"text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed-cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.",
"title": ""
},
{
"docid": "8d7e8ee0f6305d50276d25ce28bcdf9c",
"text": "The advancement of visual sensing has introduced better capturing of the discrete information from a complex, crowded scene for assisting in the analysis. However, after reviewing existing system, we find that majority of the work carried out till date is associated with significant problems in modeling event detection as well as reviewing abnormality of the given scene. Therefore, the proposed system introduces a model that is capable of identifying the degree of abnormality for an event captured on the crowded scene using unsupervised training methodology. The proposed system contributes to developing a novel region-wise repository to extract the contextual information about the discrete-event for a given scene. The study outcome shows highly improved the balance between the computational time and overall accuracy as compared to the majority of the standard research work emphasizing on event",
"title": ""
},
{
"docid": "73dd0790faebf8e6554f855c7e6e0285",
"text": "Forecasting the future observations of time-series data can be performed by modeling the trend and fluctuations from the observed data. Many classical time-series analysis models like Autoregressive model (AR) and its variants have been developed to achieve such forecasting ability. While they are often based on the white noise assumption to model the data fluctuations, a more general Brownian motion has been adopted that results in Ornstein-Uhlenbeck (OU) process. The OU process has gained huge successes in predicting the future observations over many genres of time series, however, it is still limited in modeling simple diffusion dynamics driven by a single persistent factor that never evolves over time. However, in many real problems, a mixture of hidden factors are usually present, and when and how frequently they appear or disappear are unknown ahead of time. This imposes a challenge that inspires us to develop a Mixture Factorized OU process (MFOUP) to model evolving factors. The new model is able to capture the changing states of multiple mixed hidden factors, from which we can infer their roles in driving the movements of time series. We conduct experiments on three forecasting problems, covering sensor and market data streams. The results show its competitive performance on predicting future observations and capturing evolution patterns of hidden factors as compared with the other algorithms.",
"title": ""
},
{
"docid": "34c1910dbd746368671b2b795114edfe",
"text": "Article history: Received: 4.7.2015. Received in revised form: 9.1.2016. Accepted: 29.1.2016. This paper presents a design of a distributed switched reluctance motor for an integrated motorfan system. Unlike a conventional compact motor structure, the rotor is distributed into the ends of the impeller blades. This distributed structure of motor makes more space for airflow to pass through so that the system efficiency is highly improved. Simultaneously, the distributed structure gives the motor a higher torque, better efficiency and heat dissipation. The paper first gives an initial design of a switched reluctance motor based on system structure constraints and output equations, then it predicts the machine performance and determines phase current and winding turns based on equivalent magnetic circuit analysis; finally it validates and refines the analytical design with 3D transient finite element analysis. It is found that the analytical performance prediction agrees well with finite element analysis results except for the weakness on core losses estimation. The results of the design shows that the distributed switched reluctance motor can produce a large torque of pretty high efficiency at specified speeds.",
"title": ""
},
{
"docid": "1b5a800affc14f3693004d021677357d",
"text": "Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging 19-layer deep convolutional neural networks that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need of sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.",
"title": ""
},
{
"docid": "aafda1cab832f1fe92ce406676e3760f",
"text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.",
"title": ""
},
{
"docid": "f99f059e7a3a94d219f0059de1e3d655",
"text": "Membrane scaling during the treatment of aqueous solutions containing sparingly soluble salts by direct contact membrane distillation (DCMD) was investigated. The results reveal that membrane scaling caused by CaSO4 was more severe than that by CaCO3 or silicate. However, under the experimental condition used in this study and at feed and distillate temperature of 20 C and 40 C, respectively, CaSO4 scaling occurred only after a sufficiently long induction time of up to 25 h (corresponding to a saturation index of up to 1.5). The induction period decreased and the size of the CaSO4 crystals increased as the feed temperature increased. SEM analysis reveals that prior to the onset of CaSO4 scaling, the membrane surface was relatively clean and was completely free of any large crystals. Subsequently, a simple operational regime involving regular membrane flushing to reset the induction period was developed and was proven to be effective in controlling CaSO4 scaling. At a low system recovery, the permeate flux was constant despite the fact that the feed solution was always at a super saturation condition. Results reported here also confirm the interplay between induction time and the saturation index. Crown Copyright 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
ba2d6e33064b61517dfb0593665c3c47 | Graph Frequency Analysis of Brain Signals | [
{
"docid": "97490d6458ba9870ce22b3418c558c58",
"text": "The brain is expensive, incurring high material and metabolic costs for its size — relative to the size of the body — and many aspects of brain network organization can be mostly explained by a parsimonious drive to minimize these costs. However, brain networks or connectomes also have high topological efficiency, robustness, modularity and a 'rich club' of connector hubs. Many of these and other advantageous topological properties will probably entail a wiring-cost premium. We propose that brain organization is shaped by an economic trade-off between minimizing costs and allowing the emergence of adaptively valuable topological patterns of anatomical or functional connectivity between multiple neuronal populations. This process of negotiating, and re-negotiating, trade-offs between wiring cost and topological value continues over long (decades) and short (millisecond) timescales as brain networks evolve, grow and adapt to changing cognitive demands. An economical analysis of neuropsychiatric disorders highlights the vulnerability of the more costly elements of brain networks to pathological attack or abnormal development.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
}
] | [
{
"docid": "846ae985f61a0dcdb1ff3a2226c1b41a",
"text": "OBJECTIVE\nThis article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area.\n\n\nBACKGROUND\nFirst attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays.\n\n\nMETHODS\nFirst, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted.\n\n\nRESULTS\nThis review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation.\n\n\nCONCLUSION\nThe sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems.\n\n\nAPPLICATION\nTactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
},
{
"docid": "55dbe73527f91af939e068a76d0200b7",
"text": "With an ageing population in an industrialised world, the global burden of stroke is staggering millions of strokes a year. Hemiparesis is one of the most.Lancet. Rehabilitation of hemiparesis after stroke with a mirror. Altschuler EL, Wisdom SB, Stone L, Foster C, Galasko D.Rehabilitation of the severely affected paretic arm after stroke represents a major challenge, especially in the presence of sensory impairment. Objective.in patients after stroke. This article reviews the evidence for motor imagery or.",
"title": ""
},
{
"docid": "652366f6feab8f3792c0fcb74318472d",
"text": "OBJECTIVE\nTo evaluate the prefrontal space ratio (PFSR) in second- and third-trimester euploid fetuses and fetuses with trisomy 21.\n\n\nMETHODS\nThis was a retrospective study utilizing stored mid-sagittal two-dimensional images of second- and third-trimester fetal faces that were recorded during prenatal ultrasound examinations at the Department of Prenatal Medicine at the University of Tuebingen, Germany and at a private center for prenatal medicine in Nuremberg, Germany. For the normal range, 279 euploid pregnancies between 15 and 40 weeks' gestation were included. The results were compared with 91 cases with trisomy 21 between 15 and 40 weeks. For the ratio measurement, a line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and the leading edge of the skin (d1) to the distance between the skin and the point where the MM line was intercepted (d2) was calculated. The PFSR was determined by dividing d2 by d1.\n\n\nRESULTS\nIn the euploid and trisomy 21 groups, the median gestational age at the time of ultrasound examination was 21.1 (range, 15.0-40.0) and 21.4 (range, 15.0-40.3) weeks, respectively. Multiple regression analysis showed that PFSR was independent of maternal and gestational age. In the euploid group, the mean PFSR was 0.97 ± 0.29. In fetuses with trisomy 21, the mean PFSR was 0.2 ± 0.38 (P < 0.0001). The PFSR was below the 5(th) centile in 14 (5.0%) euploid fetuses and in 72 (79.1%) fetuses with trisomy 21.\n\n\nCONCLUSION\nThe PFSR is a simple and effective marker in second- and third-trimester screening for trisomy 21.",
"title": ""
},
{
"docid": "3dd238bc2b51b3aaf9b8b6900fc82d12",
"text": "Nowadays many applications are generating streaming data for an example real-time surveillance, internet traffic, sensor data, health monitoring systems, communication networks, online transactions in the financial market and so on. Data Streams are temporally ordered, fast changing, massive, and potentially infinite sequence of data. Data Stream mining is a very challenging problem. This is due to the fact that data streams are of tremendous volume and flows at very high speed which makes it impossible to store and scan streaming data multiple time. Concept evolution in streaming data further magnifies the challenge of working with streaming data. Clustering is a data stream mining task which is very useful to gain insight of data and data characteristics. Clustering is also used as a pre-processing step in over all mining process for an example clustering is used for outlier detection and for building classification model. In this paper we will focus on the challenges and necessary features of data stream clustering techniques, review and compare the literature for data stream clustering by example and variable, describe some real world applications of data stream clustering, and tools for data stream clustering.",
"title": ""
},
{
"docid": "ce1d25b3d2e32f903ce29470514abcce",
"text": "We present a method to generate a robot control strategy that maximizes the probability to accomplish a task. The task is given as a Linear Temporal Logic (LTL) formula over a set of properties that can be satisfied at the regions of a partitioned environment. We assume that the probabilities with which the properties are satisfied at the regions are known, and the robot can determine the truth value of a proposition only at the current region. Motivated by several results on partitioned-based abstractions, we assume that the motion is performed on a graph. To account for noisy sensors and actuators, we assume that a control action enables several transitions with known probabilities. We show that this problem can be reduced to the problem of generating a control policy for a Markov Decision Process (MDP) such that the probability of satisfying an LTL formula over its states is maximized. We provide a complete solution for the latter problem that builds on existing results from probabilistic model checking. We include an illustrative case study.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "5275184686a8453a1922cec7a236b66d",
"text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.",
"title": ""
},
{
"docid": "0b8c51f823cb55cbccfae098e98f28b3",
"text": "In this study, we investigate whether the “out of body” vibrotactile illusion known as funneling could be applied to enrich and thereby improve the interaction performance on a tablet-sized media device. First, a series of pilot tests was taken to determine the appropriate operational conditions and parameters (such as the tablet size, holding position, minimal required vibration amplitude, and the effect of matching visual feedback) for a two-dimensional (2D) illusory tactile rendering method. Two main experiments were then conducted to validate the basic applicability and effectiveness of the rendering method, and to further demonstrate how the illusory tactile feedback could be deployed in an interactive application and actually improve user performance. Our results showed that for a tablet-sized device (e.g., iPad mini and iPad), illusory perception was possible (localization performance of up to 85%) using a rectilinear grid with a resolution of 5 $$\\times $$ × 7 (grid size: 2.5 cm) with matching visual feedback. Furthermore, the illusory feedback was found to be a significant factor in improving the user performance in a 2D object search/attention task.",
"title": ""
},
{
"docid": "77df82cf7a9ddca2038433fa96a43cef",
"text": "In this study, new algorithms are proposed for exposing forgeries in soccer images. We propose a new and automatic algorithm to extract the soccer field, field side and the lines of field in order to generate an image of real lines for forensic analysis. By comparing the image of real lines and the lines in the input image, the forensic analyzer can easily detect line displacements of the soccer field. To expose forgery in the location of a player, we measure the height of the player using the geometric information in the soccer image and use the inconsistency of the measured height with the true height of the player as a clue for detecting the displacement of the player. In this study, two novel approaches are proposed to measure the height of a player. In the first approach, the intersections of white lines in the soccer field are employed for automatic calibration of the camera. We derive a closed-form solution to calculate different camera parameters. Then the calculated parameters of the camera are used to measure the height of a player using an interactive approach. In the second approach, the geometry of vanishing lines and the dimensions of soccer gate are used to measure a player height. Various experiments using real and synthetic soccer images show the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "8b84dc47c6a9d39ef1d094aa173a954c",
"text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.",
"title": ""
},
{
"docid": "2c1f93d4e517fe56a5ebf668e8a0bc12",
"text": "The Internet was designed with the end-to-end principle where the network layer provided merely the best-effort forwarding service. This design makes it challenging to add new services into the Internet infrastructure. However, as the Internet connectivity becomes a commodity, users and applications increasingly demand new in-network services. This paper proposes PacketCloud, a cloudlet-based open platform to host in-network services. Different from standalone, specialized middleboxes, cloudlets can efficiently share a set of commodity servers among different services, and serve the network traffic in an elastic way. PacketCloud can help both Internet Service Providers (ISPs) and emerging application/content providers deploy their services at strategic network locations. We have implemented a proof-of-concept prototype of PacketCloud. PacketCloud introduces a small additional delay, and can scale well to handle high-throughput data traffic. We have evaluated PacketCloud in both a fully functional emulated environment, and the real Internet.",
"title": ""
},
{
"docid": "4f2ebb2640a36651fd8c01f3eeb0e13e",
"text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.",
"title": ""
},
{
"docid": "6bc611936d412dde15999b2eb179c9e2",
"text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.",
"title": ""
},
{
"docid": "9825e8a24aba301c4c7be3b8b4c4dde5",
"text": "Being a cross-camera retrieval task, person re-identification suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle) adaptation. CamStyle can serve as a data augmentation approach that smooths the camera style disparities. Specifically, with CycleGAN, labeled training images can be style-transferred to each camera, and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against over-fitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which over-fitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of over-fitting. We also report competitive accuracy compared with the state of the art. Code is available at: https://github.com/zhunzhong07/CamStyle",
"title": ""
},
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
},
{
"docid": "88fa70ef8c6dfdef7d1c154438ff53c2",
"text": "There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
},
{
"docid": "08d8e372c5ae4eef9848552ee87fbd64",
"text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …",
"title": ""
}
] | scidocsrr |
8c108461114f056041167732a0fced25 | Evolving Deep Recurrent Neural Networks Using Ant Colony Optimization | [
{
"docid": "83cace7cc84332bc30eeb6bc957ea899",
"text": "Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing decision makers in many areas. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, using ANNs to model linear problems have yielded mixed results, and hence; it is not wise to apply ANNs blindly to any type of data. Autoregressive integrated moving average (ARIMA) models are one of the most popular linear models in time series forecasting, which have been widely applied in order to construct more accurate hybrid models during the past decade. Although, hybrid techniques, which decompose a time series into its linear and nonlinear components, have recently been shown to be successful for single models, these models have some disadvantages. In this paper, a novel hybridization of artificial neural networks and ARIMA model is proposed in order to overcome mentioned limitation of ANNs and yield more general and more accurate forecasting model than traditional hybrid ARIMA-ANNs models. In our proposed model, the unique advantages of ARIMA models in linear modeling are used in order to identify and magnify the existing linear structure in data, and then a neural network is used in order to determine a model to capture the underlying data generating process and predict, using preprocessed data. Empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy ybrid achieved by traditional h",
"title": ""
}
] | [
{
"docid": "d3049fee1ed622515f5332bcfa3bdd7b",
"text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.",
"title": ""
},
{
"docid": "0c9fa24357cb09cea566b7b2493390c4",
"text": "Conflict is a common phenomenon in interactions both between individuals, and between groups of individuals. As CSCW is concerned with the design of systems to support such interactions, an examination of conflict, and the various ways of dealing with it, would clearly be of benefit. This chapter surveys the literature that is most relevant to the CSCW community, covering many disciplines that have addressed particular aspects of conflict. The chapter is organised around a series of assertions, representing both commonly held beliefs about conflict, and hypotheses and theories drawn from the literature. In many cases no definitive statement can be made about the truth or falsity of an assertion: the empirical evidence both supporting and opposing is examined, and pointers are provided to further discussion in the literature. One advantage of organising the survey in this way is that it need not be read in order. Each assertion forms a self-contained essay, with cross-references to related assertions. Hence, treat the chapter as a resource to be dipped into rather than read in sequence. This introduction sets the scene by defining conflict, and providing a rationale for studying conflict in relation to CSCW. The assertions are presented in section 2, and form the main body of the chapter. Finally, section 3 relates the assertions to current work on CSCW systems.",
"title": ""
},
{
"docid": "fde0f116dfc929bf756d80e2ce69b1c7",
"text": "The particle swarm optimization (PSO), new to the electromagnetics community, is a robust stochastic evolutionary computation technique based on the movement and intelligence of swarms. This paper introduces a conceptual overview and detailed explanation of the PSO algorithm, as well as how it can be used for electromagnetic optimizations. This paper also presents several results illustrating the swarm behavior in a PSO algorithm developed by the authors at UCLA specifically for engineering optimizations (UCLA-PSO). Also discussed is recent progress in the development of the PSO and the special considerations needed for engineering implementation including suggestions for the selection of parameter values. Additionally, a study of boundary conditions is presented indicating the invisible wall technique outperforms absorbing and reflecting wall techniques. These concepts are then integrated into a representative example of optimization of a profiled corrugated horn antenna.",
"title": ""
},
{
"docid": "13daec7c27db2b174502d358b3c19f43",
"text": "The QRS complex of the ECG signal is the reference point for the most ECG applications. In this paper, we aim to describe the design and the implementation of an embedded system for detection of the QRS complexes in real-time. The design is based on the notorious algorithm of Pan & Tompkins, with a novel simple idea for the decision stage of this algorithm. The implementation uses a circuit of the current trend, i.e. the FPGA, and it is developed with the Xilinx design tool, System Generator for DSP. In the authors’ view, the specific feature, i.e. authenticity and simplicity of the proposed model, is that the threshold value is updated from the previous smallest peak; in addition, the model is entirely designed simply with MCode blocks. The hardware design is tested with five 30 minutes data records obtained from the MIT-BIH Arrhythmia database. Its accuracy exceeds 96%, knowing that four records among the five represent the worst cases in the database. In terms of the resources utilization, our implementation occupies around 30% of the used FPGA device, namely the Xilinx Spartan 3E XC3S500.",
"title": ""
},
{
"docid": "fa0c62b91643a45a5eff7c1b1fa918f1",
"text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.",
"title": ""
},
{
"docid": "512d29a398f51041466884f4decec84a",
"text": "Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.2",
"title": ""
},
{
"docid": "876e56a4c859e5fc7fa0038845317da4",
"text": "The rise of Web 2.0 with its increasingly popular social sites like Twitter, Facebook, blogs and review sites has motivated people to express their opinions publicly and more frequently than ever before. This has fueled the emerging field known as sentiment analysis whose goal is to translate the vagaries of human emotion into hard data. LCI is a social channel analysis platform that taps into what is being said to understand the sentiment with the particular ability of doing so in near real-time. LCI integrates novel algorithms for sentiment analysis and a configurable dashboard with different kinds of charts including dynamic ones that change as new data is ingested. LCI has been researched and prototyped at HP Labs in close interaction with the Business Intelligence Solutions (BIS) Division and a few customers. This paper presents an overview of the architecture and some of its key components and algorithms, focusing in particular on how LCI deals with Twitter and illustrating its capabilities with selected use cases.",
"title": ""
},
{
"docid": "cd5a267c1dac92e68ba677c4a2e06422",
"text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.",
"title": ""
},
{
"docid": "51487a368a572dc415a5a4c0d4621d4b",
"text": "Wireless sensor networks (WSNs) are an emerging technology for monitoring physical world. Different from the traditional wireless networks and ad hoc networks, the energy constraint of WSNs makes energy saving become the most important goal of various routing algorithms. For this purpose, a cluster based routing algorithm LEACH (low energy adaptive clustering hierarchy) has been proposed to organize a sensor network into a set of clusters so that the energy consumption can be evenly distributed among all the sensor nodes. Periodical cluster head voting in LEACH, however, consumes non-negligible energy and other resources. While another chain-based algorithm PEGASIS (powerefficient gathering in sensor information systems) can reduce such energy consumption, it causes a longer delay for data transmission. In this paper, we propose a routing algorithm called CCM (Chain-Cluster based Mixed routing), which makes full use of the advantages of LEACH and PEGASIS, and provide improved performance. It divides a WSN into a few chains and runs in two stages. In the first stage, sensor nodes in each chain transmit data to their own chain head node in parallel, using an improved chain routing protocol. In the second stage, all chain head nodes group as a cluster in a selforganized manner, where they transmit fused data to a voted cluster head using the cluster based routing. Experimental F. Tang (B) · M. Guo · Y. Ma Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China e-mail: tang-fl@cs.sjtu.edu.cn I. You School of Information Science, Korean Bible University, Seoul, South Korea F. Tang · S. Guo School of Computer Science and Engineering, The University of Aizu, Fukushima 965-8580, Japan results demonstrate that our CCM algorithm outperforms both LEACH and PEGASIS in terms of the product of consumed energy and delay, weighting the overall performance of both energy consumption and transmission delay.",
"title": ""
},
{
"docid": "eccae386c0b8c053abda46537efbd792",
"text": "Software Defined Networking (SDN) has recently emerged as a new network management platform. The centralized control architecture presents many new opportunities. Among the network management tasks, measurement is one of the most important and challenging one. Researchers have proposed many solutions to better utilize SDN for network measurement. Among them, how to detect Distributed Denial-of-Services (DDoS) quickly and precisely is a very challenging problem. In this paper, we propose methods to detect DDoS attacks leveraging on SDN's flow monitoring capability. Our methods utilize measurement resources available in the whole SDN network to adaptively balance the coverage and granularity of attack detection. Through simulations we demonstrate that our methods can quickly locate potential DDoS victims and attackers by using a constrained number of flow monitoring rules.",
"title": ""
},
{
"docid": "27237bf03da7f6aea13c137668def5f0",
"text": "In deep learning community, gradient based methods are typically employed to train the proposed models. These methods generally operate in a mini-batch training manner wherein a small fraction of the training data is invoked to compute an approximative gradient. It is reported that models trained with large batch are prone to generalize worse than those trained with small batch. Several inspiring works are conducted to figure out the underlying reason of this phenomenon, but almost all of them focus on classification tasks. In this paper, we investigate the influence of batch size on regression task. More specifically, we tested the generalizability of deep auto-encoder trained with varying batch size and checked some well-known measures relating to model generalization. Our experimental results lead to three conclusions. First, there exist no obvious generalization gap in regression model such as auto-encoders. Second, with a same train loss as target, small batch generally lead to solutions closer to the starting point than large batch. Third, spectral norm of weight matrices is closely related to generalizability of the model, but different layers contribute variously to the generalization performance.",
"title": ""
},
{
"docid": "fc2a7c789f742dfed24599997845b604",
"text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.",
"title": ""
},
{
"docid": "3cc6d54cb7a8507473f623a149c3c64b",
"text": "The measurement of loyalty is a topic of great interest for the marketing academic literature. The relation that loyalty has with the results of organizations has been tested by numerous studies and the search to retain profitable customers has become a maxim in firm management. Tourist destinations have not remained oblivious to this trend. However, the difficulty involved in measuring the loyalty of a tourist destination is a brake on its adoption by those in charge of destination management. The usefulness of measuring loyalty lies in being able to apply strategies which enable improving it, but that also impact on the enhancement of the organization’s results. The study of tourists’ loyalty to a destination is considered relevant for the literature and from the point of view of the management of the multiple actors involved in the tourist activity. Based on these considerations, this work proposes a synthetic indictor that allows the simple measurement of the tourist’s loyalty. To do so, we used as a starting point Best’s (2007) customer loyalty index adapted to the case of tourist destinations. We also employed a variable of results – the tourist’s overnight stays in the destination – to create a typology of customers according to their levels of loyalty and the number of their overnight stays. The data were obtained from a survey carried out with 2373 tourists of the city of Seville. In accordance with the results attained, the proposal of the synthetic indicator to measure tourist loyalty is viable, as it is a question of a simple index constructed from easily obtainable data. Furthermore, four groups of tourists have been identified, according to their degree of loyalty and profitability, using the number of overnight stays of the tourists in their visit to the destination. The study’s main contribution stems from the possibility of simply measuring loyalty and from establishing four profiles of tourists for which marketing strategies of differentiated relations can be put into practice and that contribute to the improvement of the destination’s results. © 2018 Journal of Innovation & Knowledge. Published by Elsevier España, S.L.U. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/",
"title": ""
},
{
"docid": "14b0f4542d34a114fd84f14d1f0b53e8",
"text": "Selection the ideal mate is the most confusing process in the life of most people. To explore these issues to examine differences under graduates socio-economic status have on their preference of marriage partner selection in terms of their personality traits, socio-economic status and physical attractiveness. A total of 770 respondents participated in this study. The respondents were mainly college students studying in final year degree in professional and non professional courses. The result revealed that the respondents socio-economic status significantly influence preferences in marriage partners selection in terms of personality traits, socio-economic status and physical attractiveness.",
"title": ""
},
{
"docid": "69b831bb25e5ad0f18054d533c313b53",
"text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.",
"title": ""
},
{
"docid": "148af36df5a403b33113ee5b9a7ad1d3",
"text": "The experience of interacting with a robot has been shown to be very different in comparison to people’s interaction experience with other technologies and artifacts, and often has a strong social or emotional component – a fact that raises concerns related to evaluation. In this paper we outline how this difference is due in part to the general complexity of robots’ overall context of interaction, related to their dynamic presence in the real world and their tendency to invoke a sense of agency. A growing body of work in Human-Robot Interaction (HRI) focuses on exploring this overall context and tries to unpack what exactly is unique about interaction with robots, often through leveraging evaluation methods and frameworks designed for more-traditional HCI. We raise the concern that, due to these differences, HCI evaluation methods should be applied to HRI with care, and we present a survey of HCI evaluation techJames E. Young University of Calgary, Canada, The University of Tokyo, Japan E-mail: jim.young@ucalgary.ca JaYoung Sung Georgia Institute of Technology, GA, U.S.A. E-mail: jsung@cc.gatech.edu Amy Voida University of Calgary, Canada E-mail: avoida@ucalgary.ca Ehud Sharlin University of Calgary, Canada E-mail: ehud@cpsc.ucalgary.ca Takeo Igarashi The University of Tokyo, Japan, JST ERATO, Japan E-mail: takeo@acm.org Henrik I. Christensen Georgia Institute of Technology, GA, U.S.A. E-mail: hic@cc.gatech.edu Rebecca E. Grinter Georgia Institute of Technology, GA, U.S.A. E-mail: beki@cc.gatech.edu niques from the perspective of the unique challenges of robots. Further, we have developed a new set of tools to aid evaluators in targeting and unpacking the holistic human-robot interaction experience. Our technique surrounds the development of a map of interaction experience possibilities and, as part of this, we present a set of three perspectives for targeting specific components of interaction experience, and demonstrate how these tools can be practically used in evaluation. CR Subject Classification H.1.2 [Models and principles]: user/machine systems–software psychology",
"title": ""
},
{
"docid": "00639757a1a60fe8e56b868bd6e2ff62",
"text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.",
"title": ""
},
{
"docid": "922c0a315751c90a11b018547f8027b2",
"text": "We propose a model for the recently discovered Θ+ exotic KN resonance as a novel kind of a pentaquark with an unusual color structure: a 3c ud diquark, coupled to 3c uds̄ triquark in a relative P -wave. The state has J P = 1/2+, I = 0 and is an antidecuplet of SU(3)f . A rough mass estimate of this pentaquark is close to experiment.",
"title": ""
},
{
"docid": "9b19f343a879430283881a69e3f9cb78",
"text": "Effective analysis of applications (shortly apps) is essential to understanding apps' behavior. Two analysis approaches, i.e., static and dynamic, are widely used; although, both have well known limitations. Static analysis suffers from obfuscation and dynamic code updates. Whereas, it is extremely hard for dynamic analysis to guarantee the execution of all the code paths in an app and thereby, suffers from the code coverage problem. However, from a security point of view, executing all paths in an app might be less interesting than executing certain potentially malicious paths in the app. In this work, we use a hybrid approach that combines static and dynamic analysis in an iterative manner to cover their shortcomings. We use targeted execution of interesting code paths to solve the issues of obfuscation and dynamic code updates. Our targeted execution leverages a slicing-based analysis for the generation of data-dependent slices for arbitrary methods of interest (MOI) and on execution of the extracted slices for capturing their dynamic behavior. Motivated by the fact that malicious apps use Inter Component Communications (ICC) to exchange data [19], our main contribution is the automatic targeted triggering of MOI that use ICC for passing data between components. We implement a proof of concept, TelCC, and report the results of our evaluation.",
"title": ""
}
] | scidocsrr |
0687cc1454d931b15022c0ad9fc1d8c1 | Effort during visual search and counting: insights from pupillometry. | [
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
}
] | [
{
"docid": "ca26daaa9961f7ba2343ae84245c1181",
"text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.",
"title": ""
},
{
"docid": "3a71dd4c8d9e1cf89134141cfd97023e",
"text": "We introduce a novel solid modeling framework taking advantage of the architecture of parallel computing onmodern graphics hardware. Solidmodels in this framework are represented by an extension of the ray representation — Layered Depth-Normal Images (LDNI), which inherits the good properties of Boolean simplicity, localization and domain decoupling. The defect of ray representation in computational intensity has been overcome by the newly developed parallel algorithms running on the graphics hardware equipped with Graphics Processing Unit (GPU). The LDNI for a solid model whose boundary is representedby a closedpolygonalmesh canbe generated efficientlywith thehelp of hardware accelerated sampling. The parallel algorithm for computing Boolean operations on two LDNI solids runs well on modern graphics hardware. A parallel algorithm is also introduced in this paper to convert LDNI solids to sharp-feature preserved polygonal mesh surfaces, which can be used in downstream applications (e.g., finite element analysis). Different from those GPU-based techniques for rendering CSG-tree of solid models Hable and Rossignac (2007, 2005) [1,2], we compute and store the shape of objects in solid modeling completely on graphics hardware. This greatly eliminates the communication bottleneck between the graphics memory and the main memory. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fab72d1223fa94e918952b8715e90d30",
"text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0} (> \\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.",
"title": ""
},
{
"docid": "4f15ef7dc7405f22e1ca7ae24154f5ef",
"text": "This position paper addresses current debates about data in general, and big data specifically, by examining the ethical issues arising from advances in knowledge production. Typically ethical issues such as privacy and data protection are discussed in the context of regulatory and policy debates. Here we argue that this overlooks a larger picture whereby human autonomy is undermined by the growth of scientific knowledge. To make this argument, we first offer definitions of data and big data, and then examine why the uses of data-driven analyses of human behaviour in particular have recently experienced rapid growth. Next, we distinguish between the contexts in which big data research is used, and argue that this research has quite different implications in the context of scientific as opposed to applied research. We conclude by pointing to the fact that big data analyses are both enabled and constrained by the nature of data sources available. Big data research will nevertheless inevitably become more pervasive, and this will require more awareness on the part of data scientists, policymakers and a wider public about its contexts and often unintended consequences.",
"title": ""
},
{
"docid": "46b5082df5dfd63271ec942ce28285fa",
"text": "The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.",
"title": ""
},
{
"docid": "2ee5e5ecd9304066b12771f3349155f8",
"text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: rollingthunder@optonline.net (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "cab386acd4cf89803325e5d33a095a62",
"text": "Dipyridamole is a widely prescribed drug in ischemic disorders, and it is here investigated for potential clinical use as a new treatment for breast cancer. Xenograft mice bearing triple-negative breast cancer 4T1-Luc or MDA-MB-231T cells were generated. In these in vivo models, dipyridamole effects were investigated for primary tumor growth, metastasis formation, cell cycle, apoptosis, signaling pathways, immune cell infiltration, and serum inflammatory cytokines levels. Dipyridamole significantly reduced primary tumor growth and metastasis formation by intraperitoneal administration. Treatment with 15 mg/kg/day dipyridamole reduced mean primary tumor size by 67.5 % (p = 0.0433), while treatment with 30 mg/kg/day dipyridamole resulted in an almost a total reduction in primary tumors (p = 0.0182). Experimental metastasis assays show dipyridamole reduces metastasis formation by 47.5 % in the MDA-MB-231T xenograft model (p = 0.0122), and by 50.26 % in the 4T1-Luc xenograft model (p = 0.0292). In vivo dipyridamole decreased activated β-catenin by 38.64 % (p < 0.0001), phospho-ERK1/2 by 25.05 % (p = 0.0129), phospho-p65 by 67.82 % (p < 0.0001) and doubled the expression of IkBα (p = 0.0019), thus revealing significant effects on Wnt, ERK1/2-MAPK and NF-kB pathways in both animal models. Moreover dipyridamole significantly decreased the infiltration of tumor-associated macrophages and myeloid-derived suppressor cells in primary tumors (p < 0.005), and the inflammatory cytokines levels in the sera of the treated mice. We suggest that when used at appropriate doses and with the correct mode of administration, dipyridamole is a promising agent for breast-cancer treatment, thus also implying its potential use in other cancers that show those highly activated pathways.",
"title": ""
},
{
"docid": "2949a903b7ab1949b6aaad305c532f4b",
"text": "This paper presents a semantics-based approach to Recommender Systems (RS), to exploit available contextual information about both the items to be recommended and the recommendation process, in an attempt to overcome some of the shortcomings of traditional RS implementations. An ontology is used as a backbone to the system, while multiple web services are orchestrated to compose a suitable recommendation model, matching the current recommendation context at run-time. To achieve such dynamic behaviour the proposed system tackles the recommendation problem by applying existing RS techniques on three different levels: the selection of appropriate sets of features, recommendation model and recommendable items.",
"title": ""
},
{
"docid": "41076f408c1c00212106433b47582a43",
"text": "Polyols such as mannitol, erythritol, sorbitol, and xylitol are naturally found in fruits and vegetables and are produced by certain bacteria, fungi, yeasts, and algae. These sugar alcohols are widely used in food and pharmaceutical industries and in medicine because of their interesting physicochemical properties. In the food industry, polyols are employed as natural sweeteners applicable in light and diabetic food products. In the last decade, biotechnological production of polyols by lactic acid bacteria (LAB) has been investigated as an alternative to their current industrial production. While heterofermentative LAB may naturally produce mannitol and erythritol under certain culture conditions, sorbitol and xylitol have been only synthesized through metabolic engineering processes. This review deals with the spontaneous formation of mannitol and erythritol in fermented foods and their biotechnological production by heterofermentative LAB and briefly presented the metabolic engineering processes applied for polyol formation.",
"title": ""
},
{
"docid": "cc2822b15ccf29978252b688111d58cd",
"text": "Today, even a moderately sized corporate intranet contains multiple firewalls and routers, which are all used to enforce various aspects of the global corporate security policy. Configuring these devices to work in unison is difficult, especially if they are made by different vendors. Even testing or reverse-engineering an existing configuration (say, when a new security administrator takes over) is hard. Firewall configuration files are written in low-level formalisms, whose readability is comparable to assembly code, and the global policy is spread over all the firewalls that are involved. To alleviate some of these difficulties, we designed and implemented a novel firewall analysis tool. Our software allows the administrator to easily discover and test the global firewall policy (either a deployed policy or a planned one). Our tool uses a minimal description of the network topology, and directly parses the various vendor-specific lowlevel configuration files. It interacts with the user through a query-and-answer session, which is conducted at a much higher level of abstraction. A typical question our tool can answer is “from which machines can our DMZ be reached, and with which services?”. Thus, our tool complements existing vulnerability analysis tools, as it can be used before a policy is actually deployed, it operates on a more understandable level of abstraction, and it deals with all the firewalls at once.",
"title": ""
},
{
"docid": "b08f67bc9b84088f8298b35e50d0b9c5",
"text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.",
"title": ""
},
{
"docid": "cceec94ed2462cd657be89033244bbf9",
"text": "This paper examines how student effort, consistency, motivation, and marginal learning, influence student grades in an online course. We use data from eleven Microeconomics courses taught online for a total of 212 students. Our findings show that consistency, or less time variation, is a statistically significant explanatory variable, whereas effort, or total minutes spent online, is not. Other independent variables include GPA and the difference between a pre-test and a post-test. The GPA is used as a measure of motivation, and the difference between a posttest and pre-test as marginal learning. As expected, the level of motivation is found statistically significant at a 99% confidence level, and marginal learning is also significant at a 95% level.",
"title": ""
},
{
"docid": "5efd5fb9caaeadb90a684d32491f0fec",
"text": "The ModelNiew/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the ModelNiew/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the ModeWViewKontroller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various clientherver architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.",
"title": ""
},
{
"docid": "90b3e6aee6351b196445843ca8367a3b",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "2af5e18cfb6dadd4d5145a1fa63f0536",
"text": "Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulties are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated to the measurement process such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In all topics, we describe the state-of-the-art, provide illustrative examples, and point to future challenges and research directions.",
"title": ""
},
{
"docid": "356684bac2e5fecd903eb428dc5455f4",
"text": "Social media expose millions of users every day to information campaigns - some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.",
"title": ""
},
{
"docid": "a6fec60aeb6e5824ed07eaa3257969aa",
"text": "What aspects of information assurance can be identified in Business-to-Consumer (B-toC) online transactions? The purpose of this research is to build a theoretical framework for studying information assurance based on a detailed analysis of academic literature for online exchanges in B-to-C electronic commerce. Further, a semantic network content analysis is conducted to analyze the representations of information assurance in B-to-C electronic commerce in the real online market place (transaction Web sites of selected Fortune 500 firms). The results show that the transaction websites focus on some perspectives and not on others. For example, we see an emphasis on the importance of technological and consumer behavioral elements of information assurance such as issues of online security and privacy. Further corporate practitioners place most emphasis on transaction-related information assurance issues. Interestingly, the product and institutional dimension of information assurance in online transaction websites are only",
"title": ""
},
{
"docid": "fee4b80923ff9b6611e95836a90beb06",
"text": "We present an annotation management system for relational databases. In this system, every piece of data in a relation is assumed to have zero or more annotations associated with it and annotations are propagated along, from the source to the output, as data is being transformed through a query. Such an annotation management system could be used for understanding the provenance (aka lineage) of data, who has seen or edited a piece of data or the quality of data, which are useful functionalities for applications that deal with integration of scientific and biological data. We present an extension, pSQL, of a fragment of SQL that has three different types of annotation propagation schemes, each useful for different purposes. The default scheme propagates annotations according to where data is copied from. The default-all scheme propagates annotations according to where data is copied from among all equivalent formulations of a given query. The custom scheme allows a user to specify how annotations should propagate. We present a storage scheme for the annotations and describe algorithms for translating a pSQL query under each propagation scheme into one or more SQL queries that would correctly retrieve the relevant annotations according to the specified propagation scheme. For the default-all scheme, we also show how we generate finitely many queries that can simulate the annotation propagation behavior of the set of all equivalent queries, which is possibly infinite. The algorithms are implemented and the feasibility of the system is demonstrated by a set of experiments that we have conducted.",
"title": ""
}
] | scidocsrr |
c4fecb931da091a5614c02f88718a6a7 | Major Traits / Qualities of Leadership | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
}
] | [
{
"docid": "5fe1fa98c953d778ee27a104802e5f2b",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "e6db8cbbb3f7bac211f672ffdef44fb6",
"text": "This paper aims to develop a benchmarking framework that evaluates the cold chain performance of a company, reveals its strengths and weaknesses and finally identifies and prioritizes potential alternatives for continuous improvement. A Delphi-AHP-TOPSIS based methodology has divided the whole benchmarking into three stages. The first stage is Delphi method, where identification, synthesis and prioritization of key performance factors and sub-factors are done and a novel consistent measurement scale is developed. The second stage is Analytic Hierarchy Process (AHP) based cold chain performance evaluation of a selected company against its competitors, so as to observe cold chain performance of individual factors and sub-factors, as well as overall performance index. And, the third stage is Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based assessment of possible alternatives for the continuous improvement of the company’s cold chain performance. Finally a demonstration of proposed methodology in a retail industry is presented for better understanding. The proposed framework can assist managers to comprehend the present strengths and weaknesses of their cold. They can identify good practices from the market leader and can benchmark them for improving weaknesses keeping in view the current operational conditions and strategies of the company. This framework also facilitates the decision makers to better understand the complex relationships of the relevant cold chain performance factors in decision-making. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72420289372499b50e658ef0957a3ad9",
"text": "A ripple current cancellation technique injects AC current into the output voltage bus of a converter that is equal and opposite to the normal converter ripple current. The output current ripple is ideally zero, leading to ultra-low noise converter output voltages. The circuit requires few additional components, no active circuits are required. Only an additional filter inductor winding, an auxiliary inductor, and small capacitor are required. The circuit utilizes leakage inductance of the modified filter inductor as all or part of the required auxiliary inductance. Ripple cancellation is independent of switching frequency, duty cycle, and other converter parameters. The circuit eliminates ripple current in both continuous conduction mode and discontinuous conduction mode. Experimental results provide better than an 80/spl times/ ripple current reduction.",
"title": ""
},
{
"docid": "19f1a6c9c5faf73b8868164e8bb310c6",
"text": "Holoprosencephaly refers to a spectrum of craniofacial malformations including cyclopia, ethmocephaly, cebocephaly, and premaxillary agenesis. Etiologic heterogeneity is well documented. Chromosomal, genetic, and teratogenic factors have been implicated. Recognition of holoprosencephaly as a developmental field defect stresses the importance of close scrutiny of relatives for mild forms such as single median incisor, hypotelorism, bifid uvula, or pituitary deficiency.",
"title": ""
},
{
"docid": "c0b40058d003cdaa80d54aa190e48bc2",
"text": "Visual tracking plays an important role in many computer vision tasks. A common assumption in previous methods is that the video frames are blur free. In reality, motion blurs are pervasive in the real videos. In this paper we present a novel BLUr-driven Tracker (BLUT) framework for tracking motion-blurred targets. BLUT actively uses the information from blurs without performing debluring. Specifically, we integrate the tracking problem with the motion-from-blur problem under a unified sparse approximation framework. We further use the motion information inferred by blurs to guide the sampling process in the particle filter based tracking. To evaluate our method, we have collected a large number of video sequences with significatcant motion blurs and compared BLUT with state-of-the-art trackers. Experimental results show that, while many previous methods are sensitive to motion blurs, BLUT can robustly and reliably track severely blurred targets.",
"title": ""
},
{
"docid": "ea42c551841cc53c84c63f72ee9be0ae",
"text": "Phishing is a prevalent issue of today’s Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called –NoPhish–, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users’ knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on as well as ideas to further improve NoPhish.",
"title": ""
},
{
"docid": "b468726c2901146f1ca02df13936e968",
"text": "Chinchillas have been successfully maintained in captivity for almost a century. They have only recently been recognized as excellent, long-lived, and robust pets. Most of the literature on diseases of chinchillas comes from farmed chinchillas, whereas reports of pet chinchilla diseases continue to be sparse. This review aims to provide information on current, poorly reported disorders of pet chinchillas, such as penile problems, urolithiasis, periodontal disease, otitis media, cardiac disease, pseudomonadal infections, and giardiasis. This review is intended to serve as a complement to current veterinary literature while providing valuable and clinically relevant information for veterinarians treating chinchillas.",
"title": ""
},
{
"docid": "872370f375d779435eb098571f3ab763",
"text": "The aim of this study was to explore the potential of fused-deposition 3-dimensional printing (FDM 3DP) to produce modified-release drug loaded tablets. Two aminosalicylate isomers used in the treatment of inflammatory bowel disease (IBD), 5-aminosalicylic acid (5-ASA, mesalazine) and 4-aminosalicylic acid (4-ASA), were selected as model drugs. Commercially produced polyvinyl alcohol (PVA) filaments were loaded with the drugs in an ethanolic drug solution. A final drug-loading of 0.06% w/w and 0.25% w/w was achieved for the 5-ASA and 4-ASA strands, respectively. 10.5mm diameter tablets of both PVA/4-ASA and PVA/5-ASA were subsequently printed using an FDM 3D printer, and varying the weight and densities of the printed tablets was achieved by selecting the infill percentage in the printer software. The tablets were mechanically strong, and the FDM 3D printing was shown to be an effective process for the manufacture of the drug, 5-ASA. Significant thermal degradation of the active 4-ASA (50%) occurred during printing, however, indicating that the method may not be appropriate for drugs when printing at high temperatures exceeding those of the degradation point. Differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) of the formulated blends confirmed these findings while highlighting the potential of thermal analytical techniques to anticipate drug degradation issues in the 3D printing process. The results of the dissolution tests conducted in modified Hank's bicarbonate buffer showed that release profiles for both drugs were dependent on both the drug itself and on the infill percentage of the tablet. Our work here demonstrates the potential role of FDM 3DP as an efficient and low-cost alternative method of manufacturing individually tailored oral drug dosage, and also for production of modified-release formulations.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
{
"docid": "ae800ced5663d320fcaca2df6f6bf793",
"text": "Stowage planning for container vessels concerns the core competence of the shipping lines. As such, automated stowage planning has attracted much research in the past two decades, but with few documented successes. In an ongoing project, we are developing a prototype stowage planning system aiming for large containerships. The system consists of three modules: the stowage plan generator, the stability adjustment module, and the optimization engine. This paper mainly focuses on the stability adjustment module. The objective of the stability adjustment module is to check the global ship stability of the stowage plan produced by the stowage plan generator and resolve the stability issues by applying a heuristic algorithm to search for alternative feasible locations for containers that violate some of the stability criteria. We demonstrate that the procedure proposed is capable of solving the stability problems for a large containership with more than 5000 TEUs. Keywords— Automation, Stowage Planning, Local Search, Heuristic algorithm, Stability Optimization",
"title": ""
},
{
"docid": "f289b58d16bf0b3a017a9b1c173cbeb6",
"text": "All hospitalisations for pulmonary arterial hypertension (PAH) in the Scottish population were examined to determine the epidemiological features of PAH. These data were compared with expert data from the Scottish Pulmonary Vascular Unit (SPVU). Using the linked Scottish Morbidity Record scheme, data from all adults aged 16-65 yrs admitted with PAH (idiopathic PAH, pulmonary hypertension associated with congenital heart abnormalities and pulmonary hypertension associated with connective tissue disorders) during the period 1986-2001 were identified. These data were compared with the most recent data in the SPVU database (2005). Overall, 374 Scottish males and females aged 16-65 yrs were hospitalised with incident PAH during 1986-2001. The annual incidence of PAH was 7.1 cases per million population. On December 31, 2002, there were 165 surviving cases, giving a prevalence of PAH of 52 cases per million population. Data from the SPVU were available for 1997-2006. In 2005, the last year with a complete data set, the incidence of PAH was 7.6 cases per million population and the corresponding prevalence was 26 cases per million population. Hospitalisation data from the Scottish Morbidity Record scheme gave higher prevalences of pulmonary arterial hypertension than data from the expert centres (Scotland and France). The hospitalisation data may overestimate the true frequency of pulmonary arterial hypertension in the population, but it is also possible that the expert centres underestimate the true frequency.",
"title": ""
},
{
"docid": "99dcde334931eeb8e20ce7aa3c7982d5",
"text": "We describe a framework for multiscale image analysis in which line segments play a role analogous to the role played by points in wavelet analysis. The framework has five key components. The beamlet dictionary is a dyadicallyorganized collection of line segments, occupying a range of dyadic locations and scales, and occurring at a range of orientations. The beamlet transform of an image f(x, y) is the collection of integrals of f over each segment in the beamlet dictionary; the resulting information is stored in a beamlet pyramid. The beamlet graph is the graph structure with pixel corners as vertices and beamlets as edges; a path through this graph corresponds to a polygon in the original image. By exploiting the first four components of the beamlet framework, we can formulate beamlet-based algorithms which are able to identify and extract beamlets and chains of beamlets with special properties. In this paper we describe a four-level hierarchy of beamlet algorithms. The first level consists of simple procedures which ignore the structure of the beamlet pyramid and beamlet graph; the second level exploits only the parent-child dependence between scales; the third level incorporates collinearity and co-curvity relationships; and the fourth level allows global optimization over the full space of polygons in an image. These algorithms can be shown in practice to have suprisingly powerful and apparently unprecedented capabilities, for example in detection of very faint curves in very noisy data. We compare this framework with important antecedents in image processing (Brandt and Dym; Horn and collaborators; Götze and Druckenmiller) and in geometric measure theory (Jones; David and Semmes; and Lerman).",
"title": ""
},
{
"docid": "faa1a49f949d5ba997f4285ef2e708b2",
"text": "Appendiceal mucinous neoplasms sometimes present with peritoneal dissemination, which was previously a lethal condition with a median survival of about 3 years. Traditionally, surgical treatment consisted of debulking that was repeated until no further benefit could be achieved; systemic chemotherapy was sometimes used as a palliative option. Now, visible disease tends to be removed through visceral resections and peritonectomy. To avoid entrapment of tumour cells at operative sites and to destroy small residual mucinous tumour nodules, cytoreductive surgery is combined with intraperitoneal chemotherapy with mitomycin at 42 degrees C. Fluorouracil is then given postoperatively for 5 days. If the mucinous neoplasm is minimally invasive and cytoreduction complete, these treatments result in a 20-year survival of 70%. In the absence of a phase III study, this new combined treatment should be regarded as the standard of care for epithelial appendiceal neoplasms and pseudomyxoma peritonei syndrome.",
"title": ""
},
{
"docid": "981e88bd1f4187972f8a3d04960dd2dd",
"text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.",
"title": ""
},
{
"docid": "26dc59c30371f1d0b2ff2e62a96f9b0f",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
{
"docid": "58702f835df43337692f855f35a9f903",
"text": "A dual-mode wide-band transformer based VCO is proposed. The two port impedance of the transformer based resonator is analyzed to derive the optimum primary to secondary capacitor load ratio, for robust mode selectivity and minimum power consumption. Fabricated in a 16nm FinFET technology, the design achieves 2.6× continuous tuning range spanning 7-to-18.3 GHz using a coil area of 120×150 μm2. The absence of lossy switches helps in maintaining phase noise of -112 to -100 dBc/Hz at 1 MHz offset, across the entire tuning range. The VCO consumes 3-4.4 mW and realizes power frequency tuning normalized figure of merit of 12.8 and 2.4 dB at 7 and 18.3 GHz respectively.",
"title": ""
},
{
"docid": "4d8c869c9d6e1d7ba38f56a124b84412",
"text": "We propose a novel reversible jump Markov chain Monte Carlo (MCMC) simulated an nealing algorithm to optimize radial basis function (RBF) networks. This algorithm enables us to maximize the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the pa rameters and number of parameters, thereby surmounting the problem of local minima. We also show that by calibrating a Bayesian model, we can obtain the classical AIC, BIC and MDL model selection criteria within a penalized likelihood framework. Finally, we show theoretically and empirically that the algorithm converges to the modes of the full posterior distribution in an efficient way.",
"title": ""
},
{
"docid": "ceb59133deb7828edaf602308cb3450a",
"text": "Abstract While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.",
"title": ""
},
{
"docid": "55ffe87f74194ab3de60fea9d888d9ad",
"text": "A new priority queue implementation for the future event set problem is described in this article. The new implementation is shown experimentally to be O(1) in queue size for the priority increment distributions recently considered by Jones in his review article. It displays hold times three times shorter than splay trees for a queue size of 10,000 events. The new implementation, called a calendar queue, is a very simple structure of the multiple list variety using a novel solution to the overflow problem.",
"title": ""
}
] | scidocsrr |
94057608623a7644e71b477a75cdfeda | Exponentiated Gradient Exploration for Active Learning | [
{
"docid": "cce513c48e630ab3f072f334d00b67dc",
"text": "We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG has a much smaller loss if only few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data. ] 1997 Academic Press",
"title": ""
}
] | [
{
"docid": "ac96b284847f58c7683df92e13157f40",
"text": "Falls are dangerous for the aged population as they can adversely affect health. Therefore, many fall detection systems have been developed. However, prevalent methods only use accelerometers to isolate falls from activities of daily living (ADL). This makes it difficult to distinguish real falls from certain fall-like activities such as sitting down quickly and jumping, resulting in many false positives. Body orientation is also used as a means of detecting falls, but it is not very useful when the ending position is not horizontal, e.g. falls happen on stairs. In this paper we present a novel fall detection system using both accelerometers and gyroscopes. We divide human activities into two categories: static postures and dynamic transitions. By using two tri-axial accelerometers at separate body locations, our system can recognize four kinds of static postures: standing, bending, sitting, and lying. Motions between these static postures are considered as dynamic transitions. Linear acceleration and angular velocity are measured to determine whether motion transitions are intentional. If the transition before a lying posture is not intentional, a fall event is detected. Our algorithm, coupled with accelerometers and gyroscopes, reduces both false positives and false negatives, while improving fall detection accuracy. In addition, our solution features low computational cost and real-time response.",
"title": ""
},
{
"docid": "6cbd51bbef3b56df6d97ec7b4348cd94",
"text": "This study reviews human clinical experience to date with several synthetic cannabinoids, including nabilone, levonantradol, ajulemic acid (CT3), dexanabinol (HU-211), HU-308, and SR141716 (Rimonabant®). Additionally, the concept of “clinical endogenous cannabinoid deficiency” is explored as a possible factor in migraine, idiopathic bowel disease, fibromyalgia and other clinical pain states. The concept of analgesic synergy of cannabinoids and opioids is addressed. A cannabinoid-mediated improvement in night vision at the retinal level is discussed, as well as its potential application to treatment of retinitis pigmentosa and other conditions. Additionally noted is the role of cannabinoid treatment in neuroprotection and its application to closed head injury, cerebrovascular accidents, and CNS degenerative diseases including Alzheimer, Huntington, Parkinson diseases and ALS. Excellent clinical results employing cannabis based medicine extracts (CBME) in spasticity and spasms of MS suggests extension of such treatment to other spasmodic and dystonic conditions. Finally, controversial areas of cannabinoid treatment in obstetrics, gynecology and pediatrics are addressed along with a rationale for such interventions. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <docdelivery@haworthpress. com> Website: <http://www.HaworthPress.com> 2003 by The Haworth Press, Inc. All rights reserved.]",
"title": ""
},
{
"docid": "56a7243414824a2e4ab3993dc3a90fbe",
"text": "The primary objectives of periodontal therapy are to maintain and to obtain health and integrity of the insertion apparatus and to re-establish esthetics by means of the quantitative and qualitative restoration of the gingival margin. Esthetics can be considered essential to the success of any dental procedure. However, in cleft lip and palate patients gingival esthetics do not play a relevant role, since most patients present little gingiva exposure (Mikami, 1990). The treatment protocol for cleft palate patients is complex and often requires a myriad of surgical and rehabilitative procedures that last until adulthood. In order to rehabilitate these patients and provide them with adequate physical and psychological conditions for a good quality of life, plastic surgery has been taking place since the 19th century, with the development of new techniques. By the age of six months the patients have undergone lip repair procedures (Bill, 1956; Jolleys, 1954), followed by palatoplasty at the age of 1218 months. As a consequence of these surgical interventions, the formation of innumerous scars and fibrous tissue in the anterior region may cause some sequels, such as orofacial growth alterations (Quarta and Koch, 1989; Ozawa, 2001), a shallow vestibule with lack of attached gingiva and gingival margin mobility (Falcone, 1966). A shallow vestibule in the cleft lip and palate patient is associated with the contraction of the upper lip during healing (Iino et al, 2001), which causes deleterious effects on growth, facial expression, speech, orthodontic and prosthetic treatment problems, diminished keratinized gingiva, bone graft resorption and changes in the upper lip muscle pattern. The surgical protocol at the Hospital for Rehabilitation of Craniofacial Anomalies (HRCA) in Bauru consists of carrying out primary surgeries (cheiloplasty and palatoplasty) during the first months of Periodontal Health Re-Establishment in Cleft Lip and Palate Patients through Vestibuloplasty Associated with Free Gingival Graft",
"title": ""
},
{
"docid": "ec58915a7fd321bcebc748a369153509",
"text": "For wireless charging of electric vehicle (EV) batteries, high-frequency magnetic fields are generated from magnetically coupled coils. The large air-gap between two coils may cause high leakage of magnetic fields and it may also lower the power transfer efficiency (PTE). For the first time, in this paper, we propose a new set of coil design formulas for high-efficiency and low harmonic currents and a new design procedure for low leakage of magnetic fields for high-power wireless power transfer (WPT) system. Based on the proposed design procedure, a pair of magnetically coupled coils with magnetic field shielding for a 1-kW-class golf-cart WPT system is optimized via finite-element simulation and the proposed design formulas. We built a 1-kW-class wireless EV charging system for practical measurements of the PTE, the magnetic field strength around the golf cart, and voltage/current spectrums. The fabricated system has achieved a PTE of 96% at the operating frequency of 20.15 kHz with a 156-mm air gap between the coils. At the same time, the highest magnetic field strength measured around the golf cart is 19.8 mG, which is far below the relevant electromagnetic field safety guidelines (ICNIRP 1998/2010). In addition, the third harmonic component of the measured magnetic field is 39 dB lower than the fundamental component. These practical measurement results prove the effectiveness of the proposed coil design formulas and procedure of a WPT system for high-efficiency and low magnetic field leakage.",
"title": ""
},
{
"docid": "a5001e03007f3fd166e15db37dcd3bc7",
"text": "Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models.",
"title": ""
},
{
"docid": "6300f94dbfa58583e15741e5c86aa372",
"text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.",
"title": ""
},
{
"docid": "cd42f9eba7e1018f8a21c8830400af59",
"text": "This chapter proposes a conception of lexical meaning as use-potential, in contrast to prevailing atomistic and reificational views. The issues are illustrated on the example of spatial expressions, pre-eminently prepositions. It is argued that the dichotomy between polysemy and semantic generality is a false one, with expressions occupying points on a continuum from full homonymy to full monosemy, and with typical cases of polysemy falling in between. The notion of use-potential is explored in connectionist models of spatial categorization. Some possible objections to the use-potential approach are also addressed.",
"title": ""
},
{
"docid": "c91cb54598965e1111020ab70f9fbe94",
"text": "This paper proposes a parameter estimation method for doubly-fed induction generators (DFIGs) in variable-speed wind turbine systems (WTS). The proposed method employs an extended Kalman filter (EKF) for estimation of all electrical parameters of the DFIG, i.e., the stator and rotor resistances, the leakage inductances of stator and rotor, and the mutual inductance. The nonlinear state space model of the DFIG is derived and the design procedure of the EKF is described. The observability matrix of the linearized DFIG model is computed and the observability is checked online for different operation conditions. The estimation performance of the EKF is illustrated by simulation results. The estimated parameters are plotted against their actual values. The estimation performance of the EKF is also tested under variations of the DFIG parameters to investigate the estimation accuracy for changing parameters.",
"title": ""
},
{
"docid": "7f9b9bef62aed80a918ef78dcd15fb2a",
"text": "Transferring image-based object detectors to domain of videos remains a challenging problem. Previous efforts mostly exploit optical flow to propagate features across frames, aiming to achieve a good trade-off between performance and computational complexity. However, introducing an extra model to estimate optical flow would significantly increase the overall model size. The gap between optical flow and high-level features can hinder it from establishing the spatial correspondence accurately. Instead of relying on optical flow, this paper proposes a novel module called Progressive Sparse Local Attention (PSLA), which establishes the spatial correspondence between features across frames in a local region with progressive sparse strides and uses the correspondence to propagate features. Based on PSLA, Recursive Feature Updating (RFU) and Dense feature Transforming (DFT) are introduced to model temporal appearance and enrich feature representation respectively. Finally, a novel framework for video object detection is proposed. Experiments on ImageNet VID are conducted. Our framework achieves a state-of-the-art speedaccuracy trade-off with significantly reduced model capacity.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "67755a3dd06b09f458d1ee013e18c8ef",
"text": "Spiking neural networks are naturally asynchronous and use pulses to carry information. In this paper, we consider implementing such networks on a digital chip. We used an event-based simulator and we started from a previously established simulation, which emulates an analog spiking neural network, that can extract complex and overlapping, temporally correlated features. We modified this simulation to allow an easier integration in an embedded digital implementation. We first show that a four bits synaptic weight resolution is enough to achieve the best performance, although the network remains functional down to a 2 bits weight resolution. Then we show that a linear leak could be implemented to simplify the neurons leakage calculation. Finally, we demonstrate that an order-based STDP with a fixed number of potentiated synapses as low as 200 is efficient for features extraction. A simulation including these modifications, which lighten and increase the efficiency of digital spiking neural network implementation shows that the learning behavior is not affected, with a recognition rate of 98% in a cars trajectories detection application.",
"title": ""
},
{
"docid": "af98839cc3e28820c8d79403d58d903a",
"text": "Annotating the increasing amounts of user-contributed images in a personalized manner is in great demand. However, this demand is largely ignored by the mainstream of automated image annotation research. In this paper we aim for personalizing automated image annotation by jointly exploiting personalized tag statistics and content-based image annotation. We propose a cross-entropy based learning algorithm which personalizes a generic annotation model by learning from a user's multimedia tagging history. Using cross-entropy-minimization based Monte Carlo sampling, the proposed algorithm optimizes the personalization process in terms of a performance measurement which can be flexibly chosen. Automatic image annotation experiments with 5,315 realistic users in the social web show that the proposed method compares favorably to a generic image annotation method and a method using personalized tag statistics only. For 4,442 users the performance improves, where for 1,088 users the absolute performance gain is at least 0.05 in terms of average precision. The results show the value of the proposed method.",
"title": ""
},
{
"docid": "e4ce06c8e1dba5f9ec537dc137acf3ec",
"text": "Hemangiomas are relatively common benign proliferative lesion of vascular tissue origin. They are often present at birth and may become more apparent throughout life. They are seen on facial skin, tongue, lips, buccal mucosa and palate as well as muscles. Hemangiomas occur more common in females than males. This case report presents a case of capillary hemangioma in maxillary anterior region in a 10-year-old boy. How to cite this article: Satish V, Bhat M, Maganur PC, Shah P, Biradar V. Capillary Hemangioma in Maxillary Anterior Region: A Case Report. Int J Clin Pediatr Dent 2014;7(2):144-147.",
"title": ""
},
{
"docid": "a6f9dc745682efb871e338b63c0cbbc4",
"text": "Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a liner combination of a few atoms from such dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far less samples than those required by the classical Shannon-Nyquist Theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. On the other hand, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.",
"title": ""
},
{
"docid": "ffa974993a412ddba571e65f8b87f7df",
"text": "Synthetic gene switches are basic building blocks for the construction of complex gene circuits that transform mammalian cells into useful cell-based machines for next-generation biotechnological and biomedical applications. Ligand-responsive gene switches are cellular sensors that are able to process specific signals to generate gene product responses. Their involvement in complex gene circuits results in sophisticated circuit topologies that are reminiscent of electronics and that are capable of providing engineered cells with the ability to memorize events, oscillate protein production, and perform complex information-processing tasks. Microencapsulated mammalian cells that are engineered with closed-loop gene networks can be implanted into mice to sense disease-related input signals and to process this information to produce a custom, fine-tuned therapeutic response that rebalances animal metabolism. Progress in gene circuit design, in combination with recent breakthroughs in genome engineering, may result in tailored engineered mammalian cells with great potential for future cell-based therapies.",
"title": ""
},
{
"docid": "cd977d0e24fd9e26e90f2cf449141842",
"text": "Several leadership and ethics scholars suggest that the transformational leadership process is predicated on a divergent set of ethical values compared to transactional leadership. Theoretical accounts declare that deontological ethics should be associated with transformational leadership while transactional leadership is likely related to teleological ethics. However, very little empirical research supports these claims. Furthermore, despite calls for increasing attention as to how leaders influence their followers’ perceptions of the importance of ethics and corporate social responsibility (CSR) for organizational effectiveness, no empirical study to date has assessed the comparative impact of transformational and transactional leadership styles on follower CSR attitudes. Data from 122 organizational leaders and 458 of their followers indicated that leader deontological ethical values (altruism, universal rights, Kantian principles, etc.) were strongly associated with follower ratings of transformational leadership, while leader teleological ethical values (utilitarianism) were related to follower ratings of transactional leadership. As predicted, only transformational leadership was associated with follower beliefs in the stakeholder view of CSR. Implications for the study and practice of ethical leadership, future research directions, and management education are discussed.",
"title": ""
},
{
"docid": "9078698db240725e1eb9d1f088fb05f4",
"text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.",
"title": ""
},
{
"docid": "e541ae262655b7f5affefb32ce9267ee",
"text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.",
"title": ""
},
{
"docid": "eaa2ed7e15a3b0a3ada381a8149a8214",
"text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.",
"title": ""
},
{
"docid": "171e9eef8a23f5fdf05ba61a56415130",
"text": "Human moral judgment depends critically on “theory of mind,” the capacity to represent the mental states of agents. Recent studies suggest that the right TPJ (RTPJ) and, to lesser extent, the left TPJ (LTPJ), the precuneus (PC), and the medial pFC (MPFC) are robustly recruited when participants read explicit statements of an agent's beliefs and then judge the moral status of the agent's action. Real-world interactions, by contrast, often require social partners to infer each other's mental states. The current study uses fMRI to probe the role of these brain regions in supporting spontaneous mental state inference in the service of moral judgment. Participants read descriptions of a protagonist's action and then either (i) “moral” facts about the action's effect on another person or (ii) “nonmoral” facts about the situation. The RTPJ, PC, and MPFC were recruited selectively for moral over nonmoral facts, suggesting that processing moral stimuli elicits spontaneous mental state inference. In a second experiment, participants read the same scenarios, but explicit statements of belief preceded the facts: Protagonists believed their actions would cause harm or not. The response in the RTPJ, PC, and LTPJ was again higher for moral facts but also distinguished between neutral and negative outcomes. Together, the results illuminate two aspects of theory of mind in moral judgment: (1) spontaneous belief inference and (2) stimulus-driven belief integration.",
"title": ""
}
] | scidocsrr |
b3d61252436267694daa1f132f6726ca | Progress in Tourism Management Tourism supply chain management : A new research agenda | [
{
"docid": "5bd3cf8712d04b19226e53fca937e5a6",
"text": "This paper reviews the published studies on tourism demand modelling and forecasting since 2000. One of the key findings of this review is that the methods used in analysing and forecasting the demand for tourism have been more diverse than those identified by other review articles. In addition to the most popular time series and econometric models, a number of new techniques have emerged in the literature. However, as far as the forecasting accuracy is concerned, the study shows that there is no single model that consistently outperforms other models in all situations. Furthermore, this study identifies some new research directions, which include improving the forecasting accuracy through forecast combination; integrating both qualitative and quantitative forecasting approaches, tourism cycles and seasonality analysis, events’ impact assessment and risk forecasting.",
"title": ""
}
] | [
{
"docid": "1274ab286b1e3c5701ebb73adc77109f",
"text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.",
"title": ""
},
{
"docid": "5d23af3f778a723b97690f8bf54dfa41",
"text": "Software engineering techniques have been employed for many years to create software products. The selections of appropriate software development methodologies for a given project, and tailoring the methodologies to a specific requirement have been a challenge since the establishment of software development as a discipline. In the late 1990’s, the general trend in software development techniques has changed from traditional waterfall approaches to more iterative incremental development approaches with different combination of old concepts, new concepts, and metamorphosed old concepts. Nowadays, the aim of most software companies is to produce software in short time period with minimal costs, and within unstable, changing environments that inspired the birth of Agile. Agile software development practice have caught the attention of software development teams and software engineering researchers worldwide during the last decade but scientific research and published outcomes still remains quite scarce. Every agile approach has its own development cycle that results in technological, managerial and environmental changes in the software companies. This paper explains the values and principles of ten agile practices that are becoming more and more dominant in the software development industry. Agile processes are not always beneficial, they have some limitations as well, and this paper also discusses the advantages and disadvantages of Agile processes.",
"title": ""
},
{
"docid": "21e235169d37658afee28d5f3f7c831b",
"text": "Two studies assessed the effects of a training procedure (Goal Management Training, GMT), derived from Duncan's theory of goal neglect, on disorganized behavior following TBI. In Study 1, patients with traumatic brain injury (TBI) were randomly assigned to brief trials of GMT or motor skills training. GMT, but not motor skills training, was associated with significant gains on everyday paper-and-pencil tasks designed to mimic tasks that are problematic for patients with goal neglect. In Study 2, GMT was applied in a postencephalitic patient seeking to improve her meal-preparation abilities. Both naturalistic observation and self-report measures revealed improved meal preparation performance following GMT. These studies provide both experimental and clinical support for the efficacy of GMT toward the treatment of executive functioning deficits that compromise independence in patients with brain damage.",
"title": ""
},
{
"docid": "3f1ab17fb722d5a2612675673b200a82",
"text": "In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model stochvol as well as the Gaussian process volatility model GPVol, on average negative log-likelihood.",
"title": ""
},
{
"docid": "8b1bd5243d4512324e451a780c1ec7d3",
"text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this fundamentals of computer security by reading this site. We offer you the best product, always and always.",
"title": ""
},
{
"docid": "ed63ebf895f1f37ba9b788c36b8e6cfc",
"text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.",
"title": ""
},
{
"docid": "cee3833160aa1cc513e96d49b72eeea9",
"text": "Spatial filtering (SF) constitutes an integral part of building EEG-based brain-computer interfaces (BCIs). Algorithms frequently used for SF, such as common spatial patterns (CSPs) and independent component analysis, require labeled training data for identifying filters that provide information on a subject's intention, which renders these algorithms susceptible to overfitting on artifactual EEG components. In this study, beamforming is employed to construct spatial filters that extract EEG sources originating within predefined regions of interest within the brain. In this way, neurophysiological knowledge on which brain regions are relevant for a certain experimental paradigm can be utilized to construct unsupervised spatial filters that are robust against artifactual EEG components. Beamforming is experimentally compared with CSP and Laplacian spatial filtering (LP) in a two-class motor-imagery paradigm. It is demonstrated that beamforming outperforms CSP and LP on noisy datasets, while CSP and beamforming perform almost equally well on datasets with few artifactual trials. It is concluded that beamforming constitutes an alternative method for SF that might be particularly useful for BCIs used in clinical settings, i.e., in an environment where artifact-free datasets are difficult to obtain.",
"title": ""
},
{
"docid": "4af5b29ebda47240d51cd5e7765d990f",
"text": "In this paper, a Rectangular Waveguide (RW) to microstrip transition with Low-Temperature Co-fired Ceramic (LTCC) technology in Ka-band is designed, fabricated and measured. Compared to the traditional transition using a rectangular slot, the proposed Stepped-Impedance Resonator (SIR) slot enlarges the bandwidth of the transition. By introducing an additional design parameter, it generates multi-modes within the transition. To further improve the bandwidth and to adjust the performance of the transition, a resonant strip is embedded between the open microstrip line and its ground plane. Measured results agree well with that of the simulation, showing an effective bandwidth about 22% (from 28.5 GHz to 36.5GHz), an insertion loss approximately 3 dB and return loss better than 15 dB in the pass-band.",
"title": ""
},
{
"docid": "b7eb2c65c459c9d5776c1e2cba84706c",
"text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.",
"title": ""
},
{
"docid": "44480b69d1f49703db82977d1e248946",
"text": "Civic crowdfunding is a sub-type of crowdfunding whereby citizens contribute to funding community-based projects ranging from physical structures to amenities. Though civic crowdfunding has great potential for impact, it remains a developing field in terms of project success and widespread adoption. To explore how technology shapes interactions and outcomes within civic projects, our research addresses two interrelated questions: how do offline communities engage online across civic crowdfunding projects, and, what purpose does this activity serve both projects and communities? These questions are explored through discussion of types of offline communities and description of online activity across civic crowdfunding projects. We conclude by considering the implications of this knowledge for civic crowdfunding and its continued research.",
"title": ""
},
{
"docid": "5efd5fb9caaeadb90a684d32491f0fec",
"text": "The ModelNiew/Controller design pattern is very useful for architecting interactive software systems. This design pattern is partition-independent, because it is expressed in terms of an interactive application running in a single address space. Applying the ModelNiew/Controller design pattern to web-applications is therefore complicated by the fact that current technologies encourage developers to partition the application as early as in the design phase. Subsequent changes to that partitioning require considerable changes to the application's implementation despite the fact that the application logic has not changed. This paper introduces the concept of Flexible Web-Application Partitioning, a programming model and implementation infrastructure, that allows developers to apply the ModeWViewKontroller design pattern in a partition-independent manner: Applications are developed and tested in a single address-space; they can then be deployed to various clientherver architectures without changing the application's source code. In addition, partitioning decisions can be changed without modifying the application.",
"title": ""
},
{
"docid": "a9372375af0500609b7721120181c280",
"text": "Copyright © 2014 Alicia Garcia-Falgueras. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In accordance of the Creative Commons Attribution License all Copyrights © 2014 are reserved for SCIRP and the owner of the intellectual property Alicia Garcia-Falgueras. All Copyright © 2014 are guarded by law and by SCIRP as a guardian.",
"title": ""
},
{
"docid": "b856ab3760ff0f762fda12cc852903da",
"text": "This paper presents a detection method of small-foreign-metal particles using a 400 kHz SiC-MOSFETs high-frequency inverter. A 400 kHz SiC-MOSFETs high-frequency inverter is developed and applied to the small-foreign-metal particles detection on high-performance chemical films (HPCFs). HPCFs are manufactured with continuous production lines in industries. A new arrangement of IH coils are proposed, which is applicable for the practical production-lines of HPCFs. A prototype experimental model is constructed and tested. Experimental results demonstrate that the newly proposed IH coils with the constructed 400 kHz SiC-MOSFETs can heat small-foreign-metal particles and the heated small-foreign-metal particles can be detected by a thermographic camera. Experimental results with a new arrangement of IH coils also demonstrate that the proposed detection method of small-foreign-metal particles using 400 kHz SiC-MOSFETs high-frequency inverter can be applicable for the practical production lines of HPCFs.",
"title": ""
},
{
"docid": "8f4b873cab626dbf0ebfc79397086545",
"text": "R emote-sensing techniques have transformed ecological research by providing both spatial and temporal perspectives on ecological phenomena that would otherwise be difficult to study (eg Kerr and Ostrovsky 2003; Running et al. 2004; Vierling et al. 2008). In particular, a strong focus has been placed on the use of data obtained from space-borne remote-sensing instruments because these provide regional-to global-scale observations and repeat time-series sampling of ecological indicators (eg Gould 2000). The main limitation of most of the research-focused satellite missions is the mismatch between the pixel resolution of many regional-extent sensors (eg Landsat [spatial resolution of ~30 m] to the Moderate Resolution Imaging Spectro-radiometer [spatial resolution of ~1 km]), the revisit period (eg 18 days for Landsat), and the scale of many ecological processes. Indeed, data provided by these platforms are often \" too general to meet regional or local objectives \" in ecology (Wulder et al. 2004). To address this limitation, a range of new (largely commercially operated) satellite sensors have become operational over the past decade, offering data at finer than 10-m spatial resolution with more responsive capabilities (eg Quickbird, IKONOS, GeoEye-1, OrbView-3, WorldView-2). Such data are useful for ecological studies (Fretwell et al. 2012), but there remain three operational constraints: (1) a high cost per scene; (2) suitable repeat times are often only possible if oblique view angles are used, distorting geometric and radiometric pixel properties; and (3) cloud contamination, which can obscure features of interest (Loarie et al. 2007). Imaging sensors on board civilian aircraft platforms may also be used; these can provide more scale-appropriate data for fine-scale ecological studies, including data from light detection and ranging (LiDAR) sensors (Vierling et al. 2008). In theory, these surveys can be made on demand, but in practice data acquisition is costly, meaning that regular time-series monitoring is operationally constrained. A new method for fine-scale remote sensing is now emerging that could address all of these operational issues and thus potentially revolutionize spatial ecology and environmental science. Unmanned aerial vehicles (UAVs) are lightweight, low-cost aircraft platforms operated from the ground that can carry imaging or non-imaging payloads. UAVs offer ecologists a promising route to responsive, timely, and cost-effective monitoring of environmental phenomena at spatial and temporal resolutions that are appropriate to the scales of many ecologically relevant variables. Emerging from a military background, there are now a growing number of civilian agencies and organizations that have recognized the …",
"title": ""
},
{
"docid": "72d75ebfc728d3b287bcaf429a6b2ee5",
"text": "We present a fully integrated 7nm CMOS platform featuring a 3rd generation finFET architecture, SAQP for fin formation, and SADP for BEOL metallization. This technology reflects an improvement of 2.8X routed logic density and >40% performance over the 14nm reference technology described in [1-3]. A full range of Vts is enabled on-chip through a unique multi-workfunction process. This enables both excellent low voltage SRAM response and highly scaled memory area simultaneously. The HD 6-T bitcell size is 0.0269um2. This 7nm technology is fully enabled by immersion lithography and advanced optical patterning techniques (like SAQP and SADP). However, the technology platform is also designed to leverage EUV insertion for specific multi-patterned (MP) levels for cycle time benefit and manufacturing efficiency. A complete set of foundation and complex IP is available in this advanced CMOS platform to enable both High Performance Compute (HPC) and mobile applications.",
"title": ""
},
{
"docid": "caf5b727bfc59efc9f60697321796920",
"text": "As humans start to spend more time in collaborative virtual environments (CVEs) it becomes important to study their interactions in such environments. One aspect of such interactions is personal space. To begin to address this, we have conducted empirical investigations in a non immersive virtual environment: an experiment to investigate the influence on personal space of avatar gender, and an observational study to further explore the existence of personal space. Experimental results give some evidence to suggest that avatar gender has an influence on personal space although the participants did not register high personal space invasion anxiety, contrary to what one might expect from personal space invasion in the physical world. The observational study suggests that personal space does exist in CVEs, as the users tend to maintain, in a similar way to the physical world, a distance when they are interacting with each other. Our studies provide an improved understanding of personal space in CVEs and the results can be used to further enhance the usability of these environments.",
"title": ""
},
{
"docid": "2b97e03fa089cdee0bf504dd85e5e4bb",
"text": "One of the most severe threats to revenue and quality of service in telecom providers is fraud. The advent of new technologies has provided fraudsters new techniques to commit fraud. SIM box fraud is one of such fraud that has emerged with the use of VOIP technologies. In this work, a total of nine features found to be useful in identifying SIM box fraud subscriber are derived from the attributes of the Customer Database Record (CDR). Artificial Neural Networks (ANN) has shown promising solutions in classification problems due to their generalization capabilities. Therefore, supervised learning method was applied using Multi layer perceptron (MLP) as a classifier. Dataset obtained from real mobile communication company was used for the experiments. ANN had shown classification accuracy of 98.71 %.",
"title": ""
},
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "4418a2cfd7216ecdd277bde2d7799e4d",
"text": "Most of legacy systems use nowadays were modeled and documented using structured approach. Expansion of these systems in terms of functionality and maintainability requires shift towards object-oriented documentation and design, which has been widely accepted by the industry. In this paper, we present a survey of the existing Data Flow Diagram (DFD) to Unified Modeling language (UML) transformation techniques. We analyze transformation techniques using a set of parameters, identified in the survey. Based on identified parameters, we present an analysis matrix, which describes the strengths and weaknesses of transformation techniques. It is observed that most of the transformation approaches are rule based, which are incomplete and defined at abstract level that does not cover in depth transformation and automation issues. Transformation approaches are data centric, which focuses on datastore for class diagram generation. Very few of the transformation techniques have been applied on case study as a proof of concept, which are not comprehensive and majority of them are partially automated. Keywords-Unified Modeling Language (UML); Data Flow Diagram (DFD); Class Diagram; Model Transformation.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] | scidocsrr |
dda7e725f5664f85045c296ce3436776 | Real-time liquid biopsy in cancer patients: fact or fiction? | [
{
"docid": "a4d18ca808d30a25d7f974a8d9093124",
"text": "Metastases, rather than primary tumours, are responsible for most cancer deaths. To prevent these deaths, improved ways to treat metastatic disease are needed. Blood flow and other mechanical factors influence the delivery of cancer cells to specific organs, whereas molecular interactions between the cancer cells and the new organ influence the probability that the cells will grow there. Inhibition of the growth of metastases in secondary sites offers a promising approach for cancer therapy.",
"title": ""
}
] | [
{
"docid": "20cb30a452bf20c9283314decfb7eb6e",
"text": "In this paper, we apply bidirectional training to a long short term memory (LSTM) network for the first time. We also present a modified, full gradient version of the LSTM learning algorithm. We discuss the significance of framewise phoneme classification to continuous speech recognition, and the validity of using bidirectional networks for online causal tasks. On the TIMIT speech database, we measure the framewise phoneme classification scores of bidirectional and unidirectional variants of both LSTM and conventional recurrent neural networks (RNNs). We find that bidirectional LSTM outperforms both RNNs and unidirectional LSTM.",
"title": ""
},
{
"docid": "10a213a6bbf6269eb7f3d0dae8601b9a",
"text": "Behaviour trees provide the possibility of improving on existing Artificial Intelligence techniques in games by being simple to implement, scalable, able to handle the complexity of games, and modular to improve reusability. This ultimately improves the development process for designing automated game players. We cover here the use of behaviour trees to design and develop an AI-controlled player for the commercial real-time strategy game DEFCON. In particular, we evolved behaviour trees to develop a competitive player which was able to outperform the game’s original AI-bot more than 50% of the time. We aim to highlight the potential for evolving behaviour trees as a practical approach to developing AI-bots in games.",
"title": ""
},
{
"docid": "242e78ed606d13502ace6d5eae00b315",
"text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.",
"title": ""
},
{
"docid": "d399e142488766759abf607defd848f0",
"text": "The high penetration of cell phones in today's global environment offers a wide range of promising mobile marketing activities, including mobile viral marketing campaigns. However, the success of these campaigns, which remains unexplored, depends on the consumers' willingness to actively forward the advertisements that they receive to acquaintances, e.g., to make mobile referrals. Therefore, it is important to identify and understand the factors that influence consumer referral behavior via mobile devices. The authors analyze a three-stage model of consumer referral behavior via mobile devices in a field study of a firm-created mobile viral marketing campaign. The findings suggest that consumers who place high importance on the purposive value and entertainment value of a message are likely to enter the interest and referral stages. Accounting for consumers' egocentric social networks, we find that tie strength has a negative influence on the reading and decision to refer stages and that degree centrality has no influence on the decision-making process. © 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "5a9d8c0531a06b5542e8f02b2673b26d",
"text": "Given that e-tailing service failure is inevitable, a better understanding of how service failure and recovery affect customer loyalty represents an important topic for academics and practitioners. This study explores the relationship of service failure severity, service recovery justice (i.e., interactional justice, procedural justice, and distributive justice), and perceived switching costs with customer loyalty; as well, the moderating relationship of service recovery justice and perceived switching costs on the link between service failure severity and customer loyalty in the context of e-tailing are investigated. Data collected from 221 erceived switching costs ustomer loyalty useful respondents are tested against the research model using the partial least squares (PLS) approach. The results indicate that service failure severity, interactional justice, procedural justice and perceived switching costs have a significant relationship with customer loyalty, and that interactional justice can mitigate the negative relationship between service failure severity and customer loyalty. These findings provide several important theoretical and practical implications in terms of e-tailing service failure and",
"title": ""
},
{
"docid": "5b0d5ebe7666334b09a1136c1cb2d8e4",
"text": "In this paper, lesion areas affected by anthracnose are segmented using segmentation techniques, graded based on percentage of affected area and neural network classifier is used to classify normal and anthracnose affected on fruits. We have considered three types of fruit namely mango, grape and pomegranate for our work. The developed processing scheme consists of two phases. In the first phase, segmentation techniques namely thresholding, region growing, K-means clustering and watershed are employed for separating anthracnose affected lesion areas from normal area. Then these affected areas are graded by calculating the percentage of affected area. In the second phase texture features are extracted using Runlength Matrix. These features are then used for classification purpose using ANN classifier. We have conducted experimentation on a dataset of 600 fruits’ image samples. The classification accuracies for normal and affected anthracnose fruit types are 84.65% and 76.6% respectively. The work finds application in developing a machine vision system in horticulture field.",
"title": ""
},
{
"docid": "ae991359d6e76d0038de5a65f8218732",
"text": "Spatial data mining is the process of discovering interesting and previously unknown, but potentially useful patterns from the spatial and spatiotemporal data. However, explosive growth in the spatial and spatiotemporal data, and the emergence of social media and location sensing technologies emphasize the need for developing new and computationally efficient methods tailored for analyzing big data. In this paper, we review major spatial data mining algorithms by closely looking at the computational and I/O requirements and allude to few applications dealing with big spatial data.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "88ae7446c9a63086bda9109a696459bd",
"text": "OBJECTIVES\nTo perform a systematic review of neurologic involvement in Systemic sclerosis (SSc) and Localized Scleroderma (LS), describing clinical features, neuroimaging, and treatment.\n\n\nMETHODS\nWe performed a literature search in PubMed using the following MeSH terms, scleroderma, systemic sclerosis, localized scleroderma, localized scleroderma \"en coup de sabre\", Parry-Romberg syndrome, cognitive impairment, memory, seizures, epilepsy, headache, depression, anxiety, mood disorders, Center for Epidemiologic Studies Depression (CES-D), SF-36, Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), Patient Health Questionnaire-9 (PHQ-9), neuropsychiatric, psychosis, neurologic involvement, neuropathy, peripheral nerves, cranial nerves, carpal tunnel syndrome, ulnar entrapment, tarsal tunnel syndrome, mononeuropathy, polyneuropathy, radiculopathy, myelopathy, autonomic nervous system, nervous system, electroencephalography (EEG), electromyography (EMG), magnetic resonance imaging (MRI), and magnetic resonance angiography (MRA). Patients with other connective tissue disease knowingly responsible for nervous system involvement were excluded from the analyses.\n\n\nRESULTS\nA total of 182 case reports/studies addressing SSc and 50 referring to LS were identified. SSc patients totalized 9506, while data on 224 LS patients were available. In LS, seizures (41.58%) and headache (18.81%) predominated. Nonetheless, descriptions of varied cranial nerve involvement and hemiparesis were made. Central nervous system involvement in SSc was characterized by headache (23.73%), seizures (13.56%) and cognitive impairment (8.47%). Depression and anxiety were frequently observed (73.15% and 23.95%, respectively). Myopathy (51.8%), trigeminal neuropathy (16.52%), peripheral sensorimotor polyneuropathy (14.25%), and carpal tunnel syndrome (6.56%) were the most frequent peripheral nervous system involvement in SSc. Autonomic neuropathy involving cardiovascular and gastrointestinal systems was regularly described. Treatment of nervous system involvement, on the other hand, varied in a case-to-case basis. However, corticosteroids and cyclophosphamide were usually prescribed in severe cases.\n\n\nCONCLUSIONS\nPreviously considered a rare event, nervous system involvement in scleroderma has been increasingly recognized. Seizures and headache are the most reported features in LS en coup de sabre, while peripheral and autonomic nervous systems involvement predominate in SSc. Moreover, recently, reports have frequently documented white matter lesions in asymptomatic SSc patients, suggesting smaller branches and perforating arteries involvement.",
"title": ""
},
{
"docid": "cfe09d26531229bd54a8009b67e9bfd7",
"text": "Rail transportation plays a critical role to safely and efficiently transport hazardous materials. A number of strategies have been implemented or are being developed to reduce the risk of hazardous materials release from train accidents. Each of these risk reduction strategies has its safety benefit and corresponding implementation cost. However, the cost effectiveness of the integration of different risk reduction strategies is not well understood. Meanwhile, there has been growing interest in the U.S. rail industry and government to best allocate resources for improving hazardous materials transportation safety. This paper presents an optimization model that considers the combination of two types of risk reduction strategies, broken rail prevention and tank car safety design enhancement. A Pareto-optimality technique is used to maximize risk reduction at a given level of investment. The framework presented in this paper can be adapted to address a broader set of risk reduction strategies and is intended to assist decision makers for local, regional and system-wide risk management of rail hazardous materials transportation.",
"title": ""
},
{
"docid": "e4c2fcc09b86dc9509a8763e7293cfe9",
"text": "This paperinvestigatesthe useof particle (sub-word) -grams for languagemodelling. One linguistics-basedand two datadriven algorithmsare presentedand evaluatedin termsof perplexity for RussianandEnglish. Interpolatingword trigramand particle6-grammodelsgivesup to a 7.5%perplexity reduction over thebaselinewordtrigrammodelfor Russian.Latticerescor ing experimentsarealsoperformedon1997DARPA Hub4evaluationlatticeswheretheinterpolatedmodelgivesa 0.4%absolute reductionin worderrorrateoverthebaselinewordtrigrammodel.",
"title": ""
},
{
"docid": "1f18623625304f7c47ca144c8acf4bc9",
"text": "Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which imposes a serious threat to DNN-based decision systems. In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. Saak transform is a recently-proposed state-of-the-art for computing the spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak transform based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) apply filtering to its high-frequency components, and, 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be effectively and efficiently defended using state-of-the-art frequency analysis.",
"title": ""
},
{
"docid": "7819d359e169ae18f9bb50f464e1233c",
"text": "As large amount of data is generated in medical organizations (hospitals, medical centers) but this data is not properly used. There is a wealth of hidden information present in the datasets. The healthcare environment is still “information rich” but “knowledge poor”. There is a lack of effective analysis tools to discover hidden relationships and trends in data. Advanced data mining techniques can help remedy this situation. For this purpose we can use different data mining techniques. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today’s medical research particularly in Heart Disease Prediction. This research has developed a prototype Heart Disease Prediction System (HDPS) using data mining techniques namely, Decision Trees, Naïve Bayes and Neural Network. This Heart disease prediction system can answer complex “what if” queries which traditional decision support systems cannot. Using medical profiles such as age, sex, blood pressure and blood sugar it can predict the likelihood of patients getting a heart disease. It enables significant knowledge, e.g. patterns, relationships between medical factors related to heart disease, to be established.",
"title": ""
},
{
"docid": "02c41de2c0447eec4c5198bccdb1d414",
"text": "The paper contends that the use of cost-benefit analysis (CBA) for or against capital punishment is problematic insofar as CBA (1) commodifies and thus reduces the value of human life and (2) cannot quantify all costs and benefits. The paramount theories of punishment, retribution and utilitarianism, which are used as rationales for capital punishment, do not justify the use of cost-benefit analysis as part of that rationale. Calling on the theory of restorative justice, the paper recommends a change in the linguistic register used to describe the value of human beings. In particular, abolitionists should emphasize that human beings have essential value. INTRODUCTION Advocates of the death penalty use economics to justify the use of capital punishment. Scott Turow, an Illinois-based lawyer says it well when he comments that two arguments frequently used by death penalty advocates are that, “the death penalty is a deterrent to others and it is more cost effective than keeping an individual in jail for life” (Turow). Edward Elijas takes the point further in writing the following, “Let’s imagine for a moment there was no death penalty. The only reasonable sentence would a life sentence. This would be costly to the tax payers, not only for the cost of housing and feeding the prisoner but because of the numerous appeals which wastes man hours and money. By treating criminals in this manner, we are encouraging behavior that will result in a prison sentence. If there is no threat of death to one who commits a murder, than that person is guaranteed to be provided with a decent living environment until their next parole hearing. They are definitely not getting the punishment they deserve” (http://www.cwrl.utexas.edu/). According to the argument, whether a person convicted",
"title": ""
},
{
"docid": "dad0c9ce47334ca6133392322068dd68",
"text": "A monolithic 64Gb MLC NAND flash based on 21nm process technology has been developed for the first time. The device consists of 4-plane arrays and provides page size of up to 32KB. It also features a newly developed DDR interface that can support up to the maximum bandwidth of 400MB/s. To address performance and reliability, on-chip randomizer, soft data readout, and incremental bit line precharge scheme have been developed.",
"title": ""
},
{
"docid": "21b4f160b73d7dbe934f7a716c667aef",
"text": "The rapid growth of silicon densities has made it feasible to deploy reconfigurable hardware as a highly parallel computing platform. However, in most cases, the application needs to be programmed in hardware description or assembly languages, whereas most application programmers are familiar with the algorithmic programming paradigm. SA-C has been proposed as an expression-oriented language designed to implicitly express data parallel operations. Morphosys is a reconfigurable system-on-chip architecture that supports a data-parallel, SIMD computational model. This paper describes a compiler framework to analyze SA-C programs, perform optimizations, and map the application onto the Morphosys architecture. The mapping process involves operation scheduling, resource allocation and binding and register allocation in the context of the Morphosys architecture. The execution times of some compiled image-processing kernels can achieve up to 42x speed-up over an 800 MHz Pentium III machine.",
"title": ""
},
{
"docid": "b0ea0b7e3900b440cb4e1d5162c6830b",
"text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
},
{
"docid": "be3204a5a4430cc3150bf0368a972e38",
"text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.",
"title": ""
}
] | scidocsrr |
0c00ccb5f363f28347e55517cfb78f95 | A Measure of Similarity of Time Series Containing Missing Data Using the Mahalanobis Distance | [
{
"docid": "d4f1cdfe13fda841edfb31ced34a4ee8",
"text": "ÐMissing data are often encountered in data sets used to construct effort prediction models. Thus far, the common practice has been to ignore observations with missing data. This may result in biased prediction models. In this paper, we evaluate four missing data techniques (MDTs) in the context of software cost modeling: listwise deletion (LD), mean imputation (MI), similar response pattern imputation (SRPI), and full information maximum likelihood (FIML). We apply the MDTs to an ERP data set, and thereafter construct regression-based prediction models using the resulting data sets. The evaluation suggests that only FIML is appropriate when the data are not missing completely at random (MCAR). Unlike FIML, prediction models constructed on LD, MI and SRPI data sets will be biased unless the data are MCAR. Furthermore, compared to LD, MI and SRPI seem appropriate only if the resulting LD data set is too small to enable the construction of a meaningful regression-based prediction model.",
"title": ""
},
{
"docid": "b9b85e8e4824b7f0cb6443d70ef38b38",
"text": "This paper presents methods for analyzing and manipulating unevenly spaced time series without a transformation to equally spaced data. Processing and analyzing such data in its unaltered form avoids the biases and information loss caused by resampling. Care is taken to develop a framework consistent with a traditional analysis of equally spaced data, as in Brockwell and Davis (1991), Hamilton (1994) and Box, Jenkins, and Reinsel (2004).",
"title": ""
}
] | [
{
"docid": "00527294606231986ba34d68e847e01a",
"text": "In this paper, we describe a new scheme to learn dynamic user's interests in an automated information filtering and gathering system running on the Internet. Our scheme is aimed to handle multiple domains of long-term and short-term user's interests simultaneously, which is learned through positive and negative user's relevance feedback. We developed a 3-descriptor approach to represent the user's interest categories. Using a learning algorithm derived for this representation, our scheme adapts quickly to significant changes in user interest, and is also able to learn exceptions to interest categories.",
"title": ""
},
{
"docid": "d029ce85b17e37abc93ab704fbef3a98",
"text": "Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondence plays a significant role in video SR. It is demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, LR optical flows are used in existing deep learning based methods for correspondence generation. In this paper, we propose an endto-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, compensated LR inputs are fed to a superresolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS10 datasets show that our framework achieves the stateof-the-art performance. The codes will be released soon at: https://github.com/LongguangWang/SOF-VSR-SuperResolving-Optical-Flow-for-Video-Super-Resolution-.",
"title": ""
},
{
"docid": "62e7974231c091845f908a50f5365d7f",
"text": "Sequentiality of access is an inherent characteristic of many database systems. We use this observation to develop an algorithm which selectively prefetches data blocks ahead of the point of reference. The number of blocks prefetched is chosen by using the empirical run length distribution and conditioning on the observed number of sequential block references immediately preceding reference to the current block. The optimal number of blocks to prefetch is estimated as a function of a number of “costs,” including the cost of accessing a block not resident in the buffer (a miss), the cost of fetching additional data blocks at fault times, and the cost of fetching blocks that are never referenced. We estimate this latter cost, described as memory pollution, in two ways. We consider the treatment (in the replacement algorithm) of prefetched blocks, whether they are treated as referenced or not, and find that it makes very little difference. Trace data taken from an operational IMS database system is analyzed and the results are presented. We show how to determine optimal block sizes. We find that anticipatory fetching of data can lead to significant improvements in system operation.",
"title": ""
},
{
"docid": "11cf4c50ced7ceafe7176a597f0f983d",
"text": "All mature hemopoietic lineage cells, with exclusion of platelets and mature erythrocytes, share the surface expression of a transmembrane phosphatase, the CD45 molecule. It is also present on hemopoietic stem cells and most leukemic clones and therefore presents as an appropriate target for immunotherapy with anti-CD45 antibodies. This short review details the biology of CD45 and its recent targeting for both treatment of malignant disorders and tolerance induction. In particular, the question of potential stem cell depletion for induction of central tolerance or depletion of malignant hemopoietic cells is addressed. Mechanisms underlying the effects downstream of CD45 binding to the cell surface are discussed.",
"title": ""
},
{
"docid": "62d63c1177b2426e133daca0ead7e50f",
"text": "⎯The problem of how to plan coal fuel blending and distribution from overseas coal sources to domestic power plants through some possible seaports by certain types of fleet in order to meet operational and environmental requirements is a complex task. The aspects under consideration includes each coal source contract’s supply, quality and price, each power plant’s demand, environmental requirements and limit on maximum number of different coal sources that can supply it, installation of blending facilities, selection of fleet types, and transient seaport’s capacity limit on fleet types. A coal blending and inter-model transportation model is explored to find optimal blending and distribution decisions for coal fuel from overseas contracts to domestic power plants. The objective in this study is to minimize total logistics costs, including procurement cost, shipping cost, and inland delivery cost. The developed model is one type of mix-integer zero-one programming problems. A real-world case problem is presented using the coal logistics system of a local electric utility company to demonstrate the benefit of the proposed approach. A well-known optimization package, AMPL-CPLEX, is utilized to solve this problem. Results from this study suggest that the obtained solution is better than the rule-of-thumb solution and the developed model provides a tool for management to conduct capacity expansion planning and power generation options. Keywords⎯Blending and inter-modal transportation model, Integer programming, Coal fuel. ∗ Corresponding author’s email: cmliu@fcu.edu.tw International Journal of Operations Research",
"title": ""
},
{
"docid": "8583702b48549c5bbf1553fa0e39a882",
"text": "A critical task for question answering is the final answer selection stage, which has to combine multiple signals available about each answer candidate. This paper proposes EviNets: a novel neural network architecture for factoid question answering. EviNets scores candidate answer entities by combining the available supporting evidence, e.g., structured knowledge bases and unstructured text documents. EviNets represents each piece of evidence with a dense embeddings vector, scores their relevance to the question, and aggregates the support for each candidate to predict their final scores. Each of the components is generic and allows plugging in a variety of models for semantic similarity scoring and information aggregation. We demonstrate the effectiveness of EviNets in experiments on the existing TREC QA and WikiMovies benchmarks, and on the new Yahoo! Answers dataset introduced in this paper. EviNets can be extended to other information types and could facilitate future work on combining evidence signals for joint reasoning in question answering.",
"title": ""
},
{
"docid": "493748a07dbf457e191487fe7459ee7e",
"text": "60 Computer T he Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: Taken as a whole, the set of Web pages lacks a unifying structure and shows far more author-ing style and content variation than that seen in traditional text-document collections. This level of complexity makes an \" off-the-shelf \" database-management and information-retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. The largest such search engines exploit technology's ability to store and index much of the Web. Such engines can therefore build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. Yet a user will be willing, typically , to look at only a few of these pages. How then, from this sea of pages, should a search engine select the correct ones—those of most value to the user? AUTHORITATIVE WEB PAGES First, to distill a large Web search topic to a size that makes sense to a human user, we need a means of identifying the topic's most definitive or authoritative Web pages. The notion of authority adds a crucial second dimension to the concept of relevance: We wish to locate not only a set of relevant pages, but also those relevant pages of the highest quality. Second, the Web consists not only of pages, but hyperlinks that connect one page to another. This hyperlink structure contains an enormous amount of latent human annotation that can help automatically infer notions of authority. Specifically, the creation of a hyperlink by the author of a Web page represents an implicit endorsement of the page being pointed to; by mining the collective judgment contained in the set of such endorsements, we can gain a richer understanding of the relevance and quality of the Web's contents. To address both these parameters, we began development of the Clever system 1-3 three years ago. Clever …",
"title": ""
},
{
"docid": "8cbfb79df2516bb8a06a5ae9399e3685",
"text": "We consider the problem of approximate set similarity search under Braun-Blanquet similarity <i>B</i>(<i>x</i>, <i>y</i>) = |<i>x</i> â© <i>y</i>| / max(|<i>x</i>|, |<i>y</i>|). The (<i>b</i><sub>1</sub>, <i>b</i><sub>2</sub>)-approximate Braun-Blanquet similarity search problem is to preprocess a collection of sets <i>P</i> such that, given a query set <i>q</i>, if there exists <i>x</i> â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>) ⥠<i>b</i><sub>1</sub>, then we can efficiently return <i>x</i>â² â <i>P</i> with <i>B</i>(<i>q</i>, <i>x</i>â²) > <i>b</i><sub>2</sub>. \nWe present a simple data structure that solves this problem with space usage <i>O</i>(<i>n</i><sup>1+Ï</sup>log<i>n</i> + â<sub><i>x</i> â <i>P</i></sub>|<i>x</i>|) and query time <i>O</i>(|<i>q</i>|<i>n</i><sup>Ï</sup> log<i>n</i>) where <i>n</i> = |<i>P</i>| and Ï = log(1/<i>b</i><sub>1</sub>)/log(1/<i>b</i><sub>2</sub>). Making use of existing lower bounds for locality-sensitive hashing by OâDonnell et al. (TOCT 2014) we show that this value of Ï is tight across the parameter space, i.e., for every choice of constants 0 < <i>b</i><sub>2</sub> < <i>b</i><sub>1</sub> < 1. \nIn the case where all sets have the same size our solution strictly improves upon the value of Ï that can be obtained through the use of state-of-the-art data-independent techniques in the Indyk-Motwani locality-sensitive hashing framework (STOC 1998) such as Broderâs MinHash (CCS 1997) for Jaccard similarity and Andoni et al.âs cross-polytope LSH (NIPS 2015) for cosine similarity. Surprisingly, even though our solution is data-independent, for a large part of the parameter space we outperform the currently best data-<em>dependent</em> method by Andoni and Razenshteyn (STOC 2015).",
"title": ""
},
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "ac1b28346ae9df1dd3b455d113551caf",
"text": "The new IEEE 802.11 standard, IEEE 802.11ax, has the challenging goal of serving more Uplink (UL) traffic and users as compared with his predecessor IEEE 802.11ac, enabling consistent and reliable streams of data (average throughput) per station. In this paper we explore several new IEEE 802.11ax UL scheduling mechanisms and compare between the maximum throughputs of unidirectional UDP Multi Users (MU) triadic. The evaluation is conducted based on Multiple-Input-Multiple-Output (MIMO) and Orthogonal Frequency Division Multiple Access (OFDMA) transmission multiplexing format in IEEE 802.11ax vs. the CSMA/CA MAC in IEEE 802.11ac in the Single User (SU) and MU modes for 1, 4, 8, 16, 32 and 64 stations scenario in reliable and unreliable channels. The comparison is conducted as a function of the Modulation and Coding Schemes (MCS) in use. In IEEE 802.11ax we consider two new flavors of acknowledgment operation settings, where the maximum acknowledgment windows are 64 or 256 respectively. In SU scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 64% and 85% in reliable and unreliable channels respectively. In MU-MIMO scenario the throughputs of IEEE 802.11ax are larger than those of IEEE 802.11ac by 263% and 270% in reliable and unreliable channels respectively. Also, as the number of stations increases, the advantage of IEEE 802.11ax in terms of the access delay also increases.",
"title": ""
},
{
"docid": "2383c90591822bc0c8cec2b1b2309b7a",
"text": "Apple's iPad has attracted a lot of attention since its release in 2010 and one area in which it has been adopted is the education sector. The iPad's large multi-touch screen, sleek profile and the ability to easily download and purchase a huge variety of educational applications make it attractive to educators. This paper presents a case study of the iPad's adoption in a primary school, one of the first in the world to adopt it. From interviews with teachers and IT staff, we conclude that the iPad's main strengths are the way in which it provides quick and easy access to information for students and the support it provides for collaboration. However, staff need to carefully manage both the teaching and the administrative environment in which the iPad is used, and we provide some lessons learned that can help other schools considering adopting the iPad in the classroom.",
"title": ""
},
{
"docid": "c7fb516fbba3293c92a00beaced3e95e",
"text": "Latent Dirichlet Allocation (LDA) is a generative model describing the observed data as being composed of a mixture of underlying unobserved topics, as introduced by Blei et al. (2003). A key hyperparameter of LDA is the number of underlying topics k, which must be estimated empirically in practice. Selecting the appropriate value of k is essentially selecting the correct model to represent the data; an important issue concerning the goodness of fit. We examine in the current work a series of metrics from literature on a quantitative basis by performing benchmarks against a generated dataset with a known value of k and evaluate the ability of each metric to recover the true value, varying over multiple levels of topic resolution in the Dirichlet prior distributions. Finally, we introduce a new metric and heuristic for estimating k and demonstrate improved performance over existing metrics from the literature on several benchmarks.",
"title": ""
},
{
"docid": "f03cc92b0bc69845b9f2b6c0c6f3168b",
"text": "Relational database management systems (RDBMSs) are powerful because they are able to optimize and answer queries against any relational database. A natural language interface (NLI) for a database, on the other hand, is tailored to support that specific database. In this work, we introduce a general purpose transfer-learnable NLI with the goal of learning one model that can be used as NLI for any relational database. We adopt the data management principle of separating data and its schema, but with the additional support for the idiosyncrasy and complexity of natural languages. Specifically, we introduce an automatic annotation mechanism that separates the schema and the data, where the schema also covers knowledge about natural language. Furthermore, we propose a customized sequence model that translates annotated natural language queries to SQL statements. We show in experiments that our approach outperforms previous NLI methods on the WikiSQL dataset and the model we learned can be applied to another benchmark dataset OVERNIGHT without retraining.",
"title": ""
},
{
"docid": "6c7172b5c91601646a7cdc502c88d22f",
"text": "In this paper, a number of options and issues are illustrated which companies and organizations seeking to incorporate environmental issues in product design and realization should consider. A brief overview and classification of a number of approaches for reducing the environmental impact is given, as well as their organizational impact. General characteristics, representative examples, and integration and information management issues of design tools supporting environmentally conscious product design are provided as well. 1 From Design for Manufacture to Design for the Life Cycle and Beyond One can argue that the “good old days” where a product was being designed, manufactured and sold to the customer with little or no subsequent concern are over. In the seventies, with the emergence of life-cycle engineering and concurrent engineering in the United States, companies became more aware of the need to include serviceability and maintenance issues in their design processes. A formal definition for Concurrent Engineering is given in (Winner, et al., 1988), as “a systematic approach to the integrated, concurrent design of products and their related processes, including manufacturing and support. This approach is intended to cause the developers, from the outset, to consider all elements of the product life cycle from conception through disposal, including quality, cost, schedule, and user requirements.” Although concurrent engineering seems to span the entire life-cycle of a product according to the preceding definition, its traditional focus has been on design, manufacturing, and maintenance. Perhaps one of the most striking areas where companies now have to be concerned is with the environment. The concern regarding environmental impact stems from the fact that, whether we want it or not, all our products affect in some way our environment during their life-span. In Figure 1, a schematic representation of a system’s life-cycle is given. Materials are mined from the earth, air and sea, processed into products, and distributed to consumers for usage, as represented by the flow from left to right in the top half of Figure 1.",
"title": ""
},
{
"docid": "962a653490e8afbcf13c47426c85ecec",
"text": "Alzheimer’s disease (AD) and mild cognitive impairment (MCI) are the most prevalent neurodegenerative brain diseases in elderly population. Recent studies on medical imaging and biological data have shown morphological alterations of subcortical structures in patients with these pathologies. In this work, we take advantage of these structural deformations for classification purposes. First, triangulated surface meshes are extracted from segmented hippocampus structures in MRI and point-to-point correspondences are established among population of surfaces using a spectral matching method. Then, a deep learning variational auto-encoder is applied on the vertex coordinates of the mesh models to learn the low dimensional feature representation. A multi-layer perceptrons using softmax activation is trained simultaneously to classify Alzheimer’s patients from normal subjects. Experiments on ADNI dataset demonstrate the potential of the proposed method in classification of normal individuals from early MCI (EMCI), late MCI (LMCI), and AD subjects with classification rates outperforming standard SVM based approach.",
"title": ""
},
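The record above jointly trains a variational auto-encoder over mesh vertex coordinates and a softmax classifier on its latent code. As a hedged sketch only (layer sizes, loss weighting, and input dimensionality are assumptions, not the paper's), the following PyTorch fragment shows the general pattern of a VAE with a jointly trained classifier head.

```python
# Minimal sketch of a VAE whose latent code also feeds a classifier head,
# trained jointly; dimensions and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshVAEClassifier(nn.Module):
    def __init__(self, in_dim=3000, latent_dim=32, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))
        self.clf = nn.Linear(latent_dim, n_classes)   # softmax classifier head

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar, self.clf(mu)

def loss_fn(x, recon, mu, logvar, logits, y, beta=1.0, gamma=1.0):
    rec = F.mse_loss(recon, x)                                   # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    ce = F.cross_entropy(logits, y)                              # classification term
    return rec + beta * kld + gamma * ce

# toy training loop on random stand-in data (real input would be flattened
# vertex coordinates of corresponded hippocampus meshes)
model = MeshVAEClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 3000)
y = torch.randint(0, 2, (64,))
for _ in range(5):
    recon, mu, logvar, logits = model(x)
    loss = loss_fn(x, recon, mu, logvar, logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```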
{
"docid": "7ab232fbbda235c42e0dabb2b128ed59",
"text": "Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.",
"title": ""
},
{
"docid": "4b012d1dc18f18118a73488e934eff4d",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: s u m m a r y Current drought information is based on indices that do not capture the joint behaviors of hydrologic variables. To address this limitation, the potential of copulas in characterizing droughts from multiple variables is explored in this study. Starting from the standardized index (SI) algorithm, a modified index accounting for seasonality is proposed for precipitation and streamflow marginals. Utilizing Indiana stations with long-term observations (a minimum of 80 years for precipitation and 50 years for streamflow), the dependence structures of precipitation and streamflow marginals with various window sizes from 1-to 12-months are constructed from empirical copulas. A joint deficit index (JDI) is defined by using the distribution function of copulas. This index provides a probability-based description of the overall drought status. Not only is the proposed JDI able to reflect both emerging and prolonged droughts in a timely manner, it also allows a month-by-month drought assessment such that the required amount of precipitation for achieving normal conditions in future can be computed. The use of JDI is generalizable to other hydrologic variables as evidenced by similar drought severities gleaned from JDIs constructed separately from precipitation and streamflow data. JDI further allows the construction of an inter-variable drought index, where the entire dependence structure of precipitation and streamflow marginals is preserved. Introduction Drought, as a prolonged status of water deficit, has been a challenging topic in water resources management. It is perceived as one of the most expensive and least understood natural disasters. In monetary terms, a typical drought costs American farmers and businesses $6–8 billion each year (WGA, 2004), more than damages incurred from floods and hurricanes. The consequences tend to be more severe in areas such as the mid-western part of the United States, where agriculture is the major economic driver. Unfortunately , though there is a strong need to develop an algorithm for characterizing and predicting droughts, it cannot be achieved easily either through physical or statistical analyses. The main obstacles are identification of complex drought-causing mechanisms, and lack of a precise (universal) scientific definition for droughts. When a drought event occurs, moisture deficits are observed in many hydrologic variables, such as precipitation, …",
"title": ""
},
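The abstract above defines a joint deficit index (JDI) from the distribution function of a copula over precipitation and streamflow marginals. As a rough, hedged sketch (using an empirical joint probability rather than a fitted copula family, and placeholder monthly series), the following shows the general idea of turning a joint non-exceedance probability into a standardized index.

```python
# Simplified illustration of a joint deficit index: rank-transform two monthly
# series to pseudo-observations, estimate the joint non-exceedance probability
# empirically, and map it to a standard-normal index (negative = joint deficit).
# This is a didactic approximation, not the paper's exact copula construction.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 30.0, size=600)           # placeholder monthly precipitation
flow = 0.6 * precip + rng.gamma(2.0, 10.0, 600)   # placeholder monthly streamflow

n = len(precip)
u = rankdata(precip) / (n + 1)   # pseudo-observations (empirical marginal CDFs)
v = rankdata(flow) / (n + 1)

# empirical copula value C(u_i, v_i): fraction of months jointly below (u_i, v_i)
joint_p = np.array([np.mean((u <= u[i]) & (v <= v[i])) for i in range(n)])
joint_p = np.clip(joint_p, 1.0 / (n + 1), 1 - 1.0 / (n + 1))

jdi = norm.ppf(joint_p)          # standardized joint deficit index
print("most severe joint-deficit month:", int(np.argmin(jdi)), "JDI =", round(float(jdi.min()), 2))
```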
{
"docid": "8ed247a04a8e5ab201807e0d300135a3",
"text": "We reproduce the Structurally Constrained Recurrent Network (SCRN) model, and then regularize it using the existing widespread techniques, such as naïve dropout, variational dropout, and weight tying. We show that when regularized and optimized appropriately the SCRN model can achieve performance comparable with the ubiquitous LSTMmodel in language modeling task on English data, while outperforming it on non-English data. Title and Abstract in Russian Воспроизведение и регуляризация SCRN модели Мы воспроизводим структурно ограниченную рекуррентную сеть (SCRN), а затем добавляем регуляризацию, используя существующие широко распространенные методы, такие как исключение (дропаут), вариационное исключение и связка параметров. Мы показываем, что при правильной регуляризации и оптимизации показатели SCRN сопоставимы с показателями вездесущей LSTM в задаче языкового моделирования на английских текстах, а также превосходят их на неанглийских данных.",
"title": ""
},
{
"docid": "b518deb76d6a59f6b88d58b563100f4b",
"text": "As part of the 50th anniversary of the Canadian Operational Research Society, we reviewed queueing applications by Canadian researchers and practitioners. We concentrated on finding real applications, but also considered theoretical contributions to applied areas that have been developed by the authors based on real applications. There were a surprising number of applications, many not well documented. Thus, this paper features examples of queueing theory applications over a spectrum of areas, years and types. One conclusion is that some of the successful queueing applications were achieved and ameliorated by using simple principles gained from studying queues and not by complex mathematical models.",
"title": ""
},
{
"docid": "f9692d0410cb97fd9c2ecf6f7b043b9f",
"text": "This paper develops and analyzes four energy scenarios for California that are both exploratory and quantitative. The businessas-usual scenario represents a pathway guided by outcomes and expectations emerging from California’s energy crisis. Three alternative scenarios represent contexts where clean energy plays a greater role in California’s energy system: Split Public is driven by local and individual activities; Golden State gives importance to integrated state planning; Patriotic Energy represents a national drive to increase energy independence. Future energy consumption, composition of electricity generation, energy diversity, and greenhouse gas emissions are analyzed for each scenario through 2035. Energy savings, renewable energy, and transportation activities are identified as promising opportunities for achieving alternative energy pathways in California. A combined approach that brings together individual and community activities with state and national policies leads to the largest energy savings, increases in energy diversity, and reductions in greenhouse gas emissions. Critical challenges in California’s energy pathway over the next decades identified by the scenario analysis include dominance of the transportation sector, dependence on fossil fuels, emissions of greenhouse gases, accounting for electricity imports, and diversity of the electricity sector. The paper concludes with a set of policy lessons revealed from the California energy scenarios. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
1f4cf2423f05ef835580dd2811cf2555 | Putting Your Best Face Forward : The Accuracy of Online Dating Photographs | [
{
"docid": "34fb2f437c5135297ec2ad52556440e9",
"text": "This study investigates self-disclosure in the novel context of online dating relationships. Using a national random sample of Match.com members (N = 349), the authors tested a model of relational goals, self-disclosure, and perceived success in online dating. The authors’findings provide support for social penetration theory and the social information processing and hyperpersonal perspectives as well as highlight the positive effect of anticipated future face-to-face interaction on online self-disclosure. The authors find that perceived online dating success is predicted by four dimensions of self-disclosure (honesty, amount, intent, and valence), although honesty has a negative effect. Furthermore, online dating experience is a strong predictor of perceived success in online dating. Additionally, the authors identify predictors of strategic success versus self-presentation success. This research extends existing theory on computer-mediated communication, selfdisclosure, and relational success to the increasingly important arena of mixed-mode relationships, in which participants move from mediated to face-to-face communication.",
"title": ""
},
{
"docid": "47aec03cf18dc3abd4d46ee017f25a16",
"text": "Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.",
"title": ""
}
] | [
{
"docid": "401bad1d0373acb71a855a28d2aeea38",
"text": "mechanobullous epidermolysis bullosa acquisita to combined treatment with immunoadsorption and rituximab (anti-CD20 monoclonal antibodies). Arch Dermatol 2007; 143: 192–198. 6 Sadler E, Schafleitner B, Lanschuetzer C et al. Treatment-resistant classical epidermolysis bullosa acquisita responding to rituximab. Br J Dermatol 2007; 157: 417–419. 7 Crichlow SM, Mortimer NJ, Harman KE. A successful therapeutic trial of rituximab in the treatment of a patient with recalcitrant, high-titre epidermolysis bullosa acquisita. Br J Dermatol 2007; 156: 194–196. 8 Saha M, Cutler T, Bhogal B, Black MM, Groves RW. Refractory epidermolysis bullosa acquisita: successful treatment with rituximab. Clin Exp Dermatol 2009; 34: e979–e980. 9 Kubisch I, Diessenbacher P, Schmidt E, Gollnick H, Leverkus M. Premonitory epidermolysis bullosa acquisita mimicking eyelid dermatitis: successful treatment with rituximab and protein A immunoapheresis. Am J Clin Dermatol 2010; 11: 289–293. 10 Meissner C, Hoefeld-Fegeler M, Vetter R et al. Severe acral contractures and nail loss in a patient with mechano-bullous epidermolysis bullosa acquisita. Eur J Dermatol 2010; 20: 543–544.",
"title": ""
},
{
"docid": "91c0658dbd6f078fdf53e9ae276a6f73",
"text": "Given a photo collection of \"unconstrained\" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "75e9253b7c6333db1aa3cef2ab364f99",
"text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.",
"title": ""
},
{
"docid": "90b6b0ff4b60e109fc111b26aab4a25c",
"text": "Due to its damage to Internet security, malware and its detection has caught the attention of both anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by the economic benefits, both diversity and sophistication of malware have significantly increased in recent years. Therefore, anti-malware industry calls for much more novel methods which are capable to protect the users against new threats, and more difficult to evade. In this paper, other than based on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternate data mining based detection techniques.",
"title": ""
},
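The abstract above propagates beliefs over a file-relation graph (files linked through the machines they appear on) to score unknown files. As a hedged, simplified sketch (a generic loopy belief propagation pass over a toy bipartite file-machine graph, not the paper's exact algorithm or data), the following illustrates the mechanics of message passing with a homophily edge potential.

```python
# Toy loopy belief propagation over a bipartite file-machine graph: files seen
# on the same machines influence each other's benign/malicious belief.
# Graph, priors, and the homophily potential are invented for illustration.
import numpy as np
from collections import defaultdict

edges = [("f1", "m1"), ("f2", "m1"), ("f2", "m2"), ("f3", "m2")]
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

# node potentials over states [benign, malicious]; unknown nodes stay uniform
prior = defaultdict(lambda: np.array([0.5, 0.5]))
prior["f1"] = np.array([0.95, 0.05])   # known benign file
prior["f3"] = np.array([0.05, 0.95])   # known malicious file

psi = np.array([[0.7, 0.3],
                [0.3, 0.7]])           # homophily edge potential

# messages m[(i, j)]: message from node i to neighbor j
msg = {(i, j): np.array([0.5, 0.5]) for i in neighbors for j in neighbors[i]}

for _ in range(10):                    # a few synchronous BP sweeps
    new = {}
    for i in neighbors:
        for j in neighbors[i]:
            incoming = prior[i].copy()
            for k in neighbors[i] - {j}:
                incoming *= msg[(k, i)]
            m = psi.T @ incoming       # sum over the sender's states
            new[(i, j)] = m / m.sum()
    msg = new

for node in sorted(neighbors):
    belief = prior[node].copy()
    for k in neighbors[node]:
        belief *= msg[(k, node)]
    belief /= belief.sum()
    print(node, "P(malicious) =", round(float(belief[1]), 3))
```

The unknown file f2 ends up with an intermediate malicious score, pulled toward its graph neighbors' labels, which is the basic behavior the abstract relies on.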
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "2bb535ff25532ccdbf85a301a872c8bd",
"text": "Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?",
"title": ""
},
{
"docid": "3c8ac7bd31d133b4d43c0d3a0f08e842",
"text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.",
"title": ""
},
{
"docid": "40df4f2d0537bca3cf92dc3005d2b9f3",
"text": "The pages of this Sample Chapter may have slight variations in final published form. H istorically, we talk of first-force psychodynamic, second-force cognitive-behavioral, and third-force existential-humanistic counseling and therapy theories. Counseling and psychotherapy really began with Freud and psychoanalysis. James Watson and, later, B. F. Skinner challenged Freud's emphasis on the unconscious and focused on observable behavior. Carl Rogers, with his person-centered counseling, revolutionized the helping professions by focusing on the importance of nurturing a caring therapist-client relationship in the helping process. All three approaches are still alive and well in the fields of counseling and psychology, as discussed in Chapters 5 through 10. As you reflect on the new knowledge and skills you exercised by reading the preceding chapters and completing the competency-building activities in those chapters, hopefully you part three 319 will see that you have gained a more sophisticated foundational understanding of the three traditional theoretical forces that have shaped the fields of counseling and therapy over the past one hundred years. Efforts in this book have been intended to bring your attention to both the strengths and limitations of psychodynamic, cognitive-behavioral, and existential-humanistic perspectives. With these perspectives in mind, the following chapters examine the fourth major theoretical force that has emerged in the mental health professions over the past 40 years: the multicultural-feminist-social justice counseling world-view. The perspectives of the fourth force challenge you to learn new competencies you will need to acquire to work effectively, respectfully, and ethically in a culturally diverse 21st-century society. Part Three begins by discussing the rise of the feminist counseling and therapy perspective (Chapter 11) and multicultural counseling and therapy (MCT) theories (Chapter 12). To assist you in synthesizing much of the information contained in all of the preceding chapters, Chapter 13 presents a comprehensive and integrative helping theory referred to as developmental counseling and therapy (DCT). Chapter 14 offers a comprehensive examination of family counseling and therapy theories to further extend your knowledge of ways that mental health practitioners can assist entire families in realizing new and untapped dimensions of their collective well-being. Finally Chapter 15 provides guidelines to help you develop your own approach to counseling and therapy that complements a growing awareness of your own values, biases, preferences, and relational compe-tencies as a mental health professional. Throughout, competency-building activities offer you opportunities to continue to exercise new skills associated with the different theories discussed in Part Three. …",
"title": ""
},
{
"docid": "21f45ec969ba3852d731a2e2119fc86e",
"text": "When a large number of people with heterogeneous knowledge and skills run a project together, it is important to use a sensible engineering process. This especially holds for a project building an intelligent autonomously driving car to participate in the 2007 DARPA Urban Challenge. In this article, we present essential elements of a software and systems engineering process for the development of artificial intelligence capable of driving autonomously in complex urban situations. The process includes agile concepts, like test first approach, continuous integration of every software module and a reliable release and configuration management assisted by software tools in integrated development environments. However, the most important ingredients for an efficient and stringent development are the ability to efficiently test the behavior of the developed system in a flexible and modular simulator for urban situations.",
"title": ""
},
{
"docid": "3df76261ff7981794e9c3d1332efe023",
"text": "The complete sequence of the 16,569-base pair human mitochondrial genome is presented. The genes for the 12S and 16S rRNAs, 22 tRNAs, cytochrome c oxidase subunits I, II and III, ATPase subunit 6, cytochrome b and eight other predicted protein coding genes have been located. The sequence shows extreme economy in that the genes have none or only a few noncoding bases between them, and in many cases the termination codons are not coded in the DNA but are created post-transcriptionally by polyadenylation of the mRNAs.",
"title": ""
},
{
"docid": "a412c41fe943120a513ad9b6fb70cb8b",
"text": "Blockchains based on proofs of work (PoW) currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. The security of PoWbased blockchains requires that new transactions are verified, making a proper replication of the blockchain data in the system essential. While existing PoW mining protocols offer considerable incentives for workers to generate blocks, workers do not have any incentives to store the blockchain. This resulted in a sharp decrease in the number of full nodes that store the full blockchain, e.g., in Bitcoin, Litecoin, etc. However, the smaller is the number of replicas or nodes storing the replicas, the higher is the vulnerability of the system against compromises and DoS-attacks. In this paper, we address this problem and propose a novel solution, EWoK (Entangled proofs of WOrk and Knowledge). EWoK regulates in a decentralized-manner the minimum number of replicas that should be stored by tying replication to the only directly-incentivized process in PoW-blockchains—which is PoW itself. EWoK only incurs small modifications to existing PoW protocols, and is fully compliant with the specifications of existing mining hardware—which is likely to increase its adoption by the existing PoW ecosystem. EWoK plugs an efficient in-memory hash-based proof of knowledge and couples them with the standard PoW mechanism. We implemented EWoK and integrated it within commonly used mining protocols, such as GetBlockTemplate and Stratum mining; our results show that EWoK can be easily integrated within existing mining pool protocols and does not impair the mining efficiency.",
"title": ""
},
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
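The abstract above argues that traditional similarity measures such as Pearson correlation and cosine behave poorly when users have rated very few items, and proposes a heuristic alternative. As a hedged sketch only (a generic proximity-and-overlap heuristic, not the paper's actual measure), the following contrasts plain cosine similarity with a simple heuristic that rewards close ratings on co-rated items and discounts tiny overlaps.

```python
# Contrast of cosine similarity with a simple cold-start-friendly heuristic:
# agreement on co-rated items, weighted by how many items the users co-rated.
# The heuristic here is illustrative, not the measure proposed in the paper.
import numpy as np

R_MAX = 5.0

def cosine_sim(u, v):
    mask = (u > 0) & (v > 0)                   # co-rated items (0 = unrated)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def heuristic_sim(u, v):
    mask = (u > 0) & (v > 0)
    n_co = int(mask.sum())
    if n_co == 0:
        return 0.0
    proximity = 1.0 - np.abs(u[mask] - v[mask]) / R_MAX   # 1 when ratings agree
    confidence = n_co / (n_co + 2.0)           # shrink similarity for tiny overlaps
    return float(proximity.mean() * confidence)

alice = np.array([5, 0, 1, 0, 4], dtype=float)   # 0 means "not rated"
bob   = np.array([5, 0, 0, 0, 0], dtype=float)   # cold-start user, one co-rated item
carol = np.array([1, 0, 5, 0, 2], dtype=float)

print("cosine alice-bob     :", round(cosine_sim(alice, bob), 3))     # misleadingly 1.0
print("heuristic alice-bob  :", round(heuristic_sim(alice, bob), 3))  # shrunk by overlap
print("heuristic alice-carol:", round(heuristic_sim(alice, carol), 3))
```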
{
"docid": "a33f962c4a6ea61d3400ca9feea50bd7",
"text": "Now, we come to offer you the right catalogues of book to open. artificial intelligence techniques for rational decision making is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "b41ee70f93fe7c52f4fc74727f43272e",
"text": "It is no secret that pornographic material is now a one-clickaway from everyone, including children and minors. General social media networks are striving to isolate adult images and videos from normal ones. Intelligent image analysis methods can help to automatically detect and isolate questionable images in media. Unfortunately, these methods require vast experience to design the classifier including one or more of the popular computer vision feature descriptors. We propose to build a classifier based on one of the recently flourishing deep learning techniques. Convolutional neural networks contain many layers for both automatic features extraction and classification. The benefit is an easier system to build (no need for hand-crafting features and classifiers). Additionally, our experiments show that it is even more accurate than the state of the art methods on the most recent benchmark dataset.",
"title": ""
},
{
"docid": "ea86e4d0581dc3be3f3671cf25b064ae",
"text": "Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSource-TrAdaBoost, and TaskTrAdaBoost, are introduced, analyzed, and applied for object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets.",
"title": ""
},
{
"docid": "eb34d154a1547db6e0a9612abc0adcf3",
"text": "Soft robots are challenging to model due to their nonlinear behavior. However, their soft bodies make it possible to safely observe their behavior under random control inputs, making them amenable to large-scale data collection and system identification. This paper implements and evaluates a system identification method based on Koopman operator theory. This theory offers a way to represent a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions called observables, enabling models of nonlinear systems to be constructed via linear regression of observed data. The approach does not suffer from some of the shortcomings of other nonlinear system identification methods, which typically require the manual tuning of training parameters and have limited convergence guarantees. A dynamic model of a pneumatic soft robot arm is constructed via this method, and used to predict the behavior of the real system. The total normalized-root-mean-square error (NRMSE) of its predictions over twelve validation trials is lower than that of several other identified models including a neural network, NLARX, nonlinear Hammerstein-Wiener, and linear state space model.",
"title": ""
},
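The abstract above identifies a linear model in a lifted space of observables (a Koopman operator approximation) by linear regression over observed state snapshots. As a hedged sketch (Extended Dynamic Mode Decomposition with a tiny hand-chosen observable dictionary on a toy nonlinear system, not the soft-robot model), this shows the basic regression step.

```python
# Extended DMD sketch: lift states with hand-chosen observables, then solve a
# least-squares problem so that Psi(x_{t+1}) ≈ K @ Psi(x_t) over all snapshots.
# The toy dynamics and observable set are assumptions for illustration.
import numpy as np

def lift(x):
    """Observables: [x, x^2, 1] for a scalar state (a deliberately small dictionary)."""
    return np.array([x, x**2, 1.0])

# collect snapshots from a toy nonlinear system x_{t+1} = 0.9*x - 0.1*x^2
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 500)
ys = 0.9 * xs - 0.1 * xs**2

Psi_X = np.stack([lift(x) for x in xs])        # shape (N, 3)
Psi_Y = np.stack([lift(y) for y in ys])

# least-squares Koopman matrix: Psi_Y ≈ Psi_X @ K.T
K = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)[0].T

# one-step prediction for a new state via the lifted linear model
x0 = 0.5
pred = (K @ lift(x0))[0]                       # first observable is the state itself
print("true:", 0.9 * x0 - 0.1 * x0**2, "predicted:", round(float(pred), 4))
```

With a richer dictionary and control inputs appended to the lifted state, the same regression yields the kind of linear predictor the abstract uses for model-based control.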
{
"docid": "9634245d2a71804083fa90a6555d13a8",
"text": "In far-field speech recognition systems, training acoustic models with alignments generated from parallel close-talk microphone data provides significant improvements. However it is not practical to assume the availability of large corpora of parallel close-talk microphone data, for training. In this paper we explore methods to reduce the performance gap between far-field ASR systems trained with alignments from distant microphone data and those trained with alignments from parallel close-talk microphone data. These methods include the use of a lattice-free sequence objective function which tolerates minor mis-alignment errors; and the use of data selection techniques to discard badly aligned data. We present results on single distant microphone and multiple distant microphone scenarios of the AMI LVCSR task. We identify prominent causes of alignment errors in AMI data.",
"title": ""
},
{
"docid": "05a35ab061a0d5ce18a3ceea8dde78f6",
"text": "A single feed grid array antenna for 24 GHz Doppler sensor is proposed in this paper. It is designed on 0.787 mm thick substrate made of Rogers Duroid 5880 (ε<sub>r</sub>= 2.2 and tan δ= 0.0009) with 0.017 mm copper claddings. Dimension of the antenna is 60 mm × 60 mm × 0.787 mm. This antenna exhibits 2.08% impedance bandwidth, 6.25% radiation bandwidth and 20.6 dBi gain at 24.2 GHz. The beamwidth is 14°and 16°in yoz and xoz planes, respectively.",
"title": ""
},
{
"docid": "ff18792f352429df42358d6b435ae813",
"text": "Recently, micro-expression recognition has seen an increase of interest from psychological and computer vision communities. As microexpressions are generated involuntarily on a person’s face, and are usually a manifestation of repressed feelings of the person. Most existing works pay attention to either the detection or spotting of micro-expression frames or the categorization of type of micro-expression present in a short video shot. In this paper, we introduced a novel automatic approach to micro-expression recognition from long video that combines both spotting and recognition mechanisms. To achieve this, the apex frame, which provides the instant when the highest intensity of facial movement occurs, is first spotted from the entire video sequence. An automatic eye masking technique is also presented to improve the robustness of apex frame spotting. With the single apex, we describe the spotted micro-expression instant using a state-of-the-art feature extractor before proceeding to classification. This is the first known work that recognizes micro-expressions from a long video sequence without the knowledge of onset and offset frames, which are typically used to determine a cropped sub-sequence containing the micro-expression. We evaluated the spotting and recognition tasks on four spontaneous micro-expression databases comprising only of raw long videos – CASME II-RAW, SMICE-HS, SMIC-E-VIS and SMIC-E-NIR. We obtained compelling results that show the effectiveness of the proposed approach, which outperform most methods that rely on human annotated sub-sequences.",
"title": ""
}
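The abstract above spots a single apex frame (the instant of peak facial movement) in a long sequence before describing and classifying it. As a hedged toy sketch (plain frame differencing instead of the paper's feature extractor and eye masking), this shows the basic idea of scoring each frame's deviation from a reference frame and taking the maximum.

```python
# Toy apex-frame spotting: score each frame by its mean absolute difference
# from the first (assumed-neutral) frame and pick the maximum. Real systems
# would use stronger features (e.g., optical flow or LBP) and eye masking.
import numpy as np

def spot_apex(frames):
    """frames: array of shape (T, H, W), grayscale video."""
    frames = frames.astype(np.float32)
    reference = frames[0]                       # assume the first frame is neutral
    scores = np.abs(frames - reference).mean(axis=(1, 2))
    return int(np.argmax(scores)), scores

# placeholder video: 100 frames with a synthetic "movement" patch in frames 35-44
video = np.zeros((100, 64, 64), dtype=np.float32)
video[35:45, 20:40, 20:40] = 1.0
apex, _ = spot_apex(video)
print("estimated apex frame:", apex)
```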
] | scidocsrr |
bd6507e6373311ec32b58755a49880a5 | An Infinite Hidden Markov Model for Short-term Interest Rates ∗ | [
{
"docid": "c9cc65bb205cd758654245d69e467d45",
"text": "Résumé This study considers the time series behavior of the U.S. real interest rate from 1961 to 1986. We provide a statistical characterization of the series using the methodology of Hamilton (1989), by allowing three possible regimes affecting both the mean and variance of the series. The results suggest that the ex-post real interest rate is essentially random around a mean that is different for the periods 1961-1973, 1973-1980 and 1980-1986. The variance of the process is also different in these episodes being higher in both the 1973-1980 and 1980-1986 sub-periods. The inflation rate series is also analyzed using a three regime framework and again our results show interesting patterns with shifts in both mean and variance. Various model selection tests are run and both an ex-ante real interest rate and an expected inflation series are constructed. Finally, we make clear how our results can explain some recent findings in the literature. Cette étude s'intéresse au comportement des séries du taux d'intérêt réel américain de 1961 à 1986. En utilisant la méthodologie d'Hamilton (1989), la modélisation statistique des séries se fait en postulant trois régimes possibles affectant la moyenne et la variance de celles-ci. Les résultats suggèrent que le taux d'intérêt réel ex-post est essentiellement un processus non corrélé et centré sur une moyenne qui diffère sur les périodes 1961-1973, 1973-1980 et 1980-1986. La variance du processus est aussi différente pour chacune de ces périodes, étant plus élevée dans les sous périodes 1973-1980 et 1980-1986. Les séries du taux d'inflation sont aussi analysées à la lumière de ce modèle à trois régimes et les résultats traduisent encore un comportement intéressant de celles-ci, avec des changements dans la moyenne et la variance. Différents tests de spécification sont utilisés et des séries, à la fois du taux d'intérêt réel ex-ante et de l'inflation anticipée, sont construites. Enfin, Il est montré comment ces résultats peuvent expliquer certaines conclusion récentes de la littérature.",
"title": ""
}
] | [
{
"docid": "8eb15b09807c1c26b7fbd8b73e11ab2b",
"text": "The work of managers in small and medium-sized enterprises is very information-intensive and the environment in which it is done is very information rich. But are managers able to exploit the wealth of information which surrounds them? And how can information be managed in organisations so that its potential for improving business performance and enhancing the competitiveness of these enterprises can be realised? Answers to these questions lie in clarifying the context of the practice of information management by exploring aspects of organisations and managerial work and in exploring the nature of information at the level of the organisation and the individual manager. From these answers it is possible to suggest some guidelines for managing the integration of business strategy and information, the adoption of a broadly-based definition of information and the development of information capabilities.",
"title": ""
},
{
"docid": "89835907e8212f7980c35ae12d711339",
"text": "In this letter, a novel ultra-wideband (UWB) bandpass filter with compact size and improved upper-stopband performance has been studied and implemented using multiple-mode resonator (MMR). The MMR is formed by attaching three pairs of circular impedance-stepped stubs in shunt to a high impedance microstrip line. By simply adjusting the radius of the circles of the stubs, the resonant modes of the MMR can be roughly allocated within the 3.1-10.6 GHz UWB band while suppressing the spurious harmonics in the upper-stopband. In order to enhance the coupling degree, two interdigital coupled-lines are used in the input and output sides. Thus, a predicted UWB passband is realized. Meanwhile, the insertion loss is higher than 30.0 dB in the upper-stopband from 12.1 to 27.8 GHz. Finally, the filter is successfully designed and fabricated. The EM-simulated and the measured results are presented in this work where excellent agreement between them is obtained.",
"title": ""
},
{
"docid": "81e9b0223d1f5ca74738646ca1f31ca9",
"text": "Limit studies on Dynamic Voltage and Frequency Scaling (DVFS) provide apparently contradictory conclusions. On the one hand early limit studies report that DVFS is effective at large timescales (on the order of million(s) of cycles) with large scaling overheads (on the order of tens of microseconds), and they conclude that there is no need for small overhead DVFS at small timescales. Recent work on the other hand—motivated by the surge of on-chip voltage regulator research—explores the potential of fine-grained DVFS and reports substantial energy savings at timescales of hundreds of cycles (while assuming no scaling overhead).\n This article unifies these apparently contradictory conclusions through a DVFS limit study that simultaneously explores timescale and scaling speed. We find that coarse-grained DVFS is unaffected by timescale and scaling speed, however, fine-grained DVFS may lead to substantial energy savings for memory-intensive workloads. Inspired by these insights, we subsequently propose a fine-grained microarchitecture-driven DVFS mechanism that scales down voltage and frequency upon individual off-chip memory accesses using on-chip regulators. Fine-grained DVFS reduces energy consumption by 12% on average and up to 23% over a collection of memory-intensive workloads for an aggressively clock-gated processor, while incurring an average 0.08% performance degradation (and at most 0.14%). We also demonstrate that the proposed fine-grained DVFS mechanism is orthogonal to existing coarse-grained DVFS policies, and further reduces energy by 6% on average and up to 11% for memory-intensive applications with limited performance impact (at most 0.7%).",
"title": ""
},
{
"docid": "c94001a32f92f5f9125f3118b0640644",
"text": "Traditional remote-server-exploiting malware is quickly evolving and adapting to the new web-centric computing paradigm. By leveraging the large population of (insecure) web sites and exploiting the vulnerabilities at client-side modern (complex) browsers (and their extensions), web-based malware becomes one of the most severe and common infection vectors nowadays. While traditional malware collection and analysis are mainly focusing on binaries, it is important to develop new techniques and tools for collecting and analyzing web-based malware, which should include a complete web-based malicious logic to reflect the dynamic, distributed, multi-step, and multi-path web infection trails, instead of just the binaries executed at end hosts. This paper is a first attempt in this direction to automatically collect web-based malware scenarios (including complete web infection trails) to enable fine-grained analysis. Based on the collections, we provide the capability for offline \"live\" replay, i.e., an end user (e.g., an analyst) can faithfully experience the original infection trail based on her current client environment, even when the original malicious web pages are not available or already cleaned. Our evaluation shows that WebPatrol can collect/cover much more complete infection trails than state-of-the-art honeypot systems such as PHoneyC [11] and Capture-HPC [1]. We also provide several case studies on the analysis of web-based malware scenarios we have collected from a large national education and research network, which contains around 35,000 web sites.",
"title": ""
},
{
"docid": "05db9a684a537fdf1234e92047618e18",
"text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.",
"title": ""
},
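The record above describes identifying DoS activity from network log files. As a hedged, simplified sketch (a fixed request-rate threshold per source IP rather than the paper's pattern-recognition approach, and an assumed log format), the following flags IPs whose request counts in fixed time windows look anomalous.

```python
# Simplified DoS screening from access logs: count requests per IP in fixed
# time windows and flag IPs that exceed a threshold. Threshold, window size,
# and log line format are assumptions for illustration only.
from collections import defaultdict
from datetime import datetime

WINDOW_SECONDS = 60
THRESHOLD = 1000  # requests per window considered suspicious (assumed)

def flag_suspicious(log_lines):
    counts = defaultdict(int)   # (ip, window_index) -> request count
    for line in log_lines:
        # assumed format: "<ip> <ISO timestamp> <request...>"
        ip, ts, *_ = line.split()
        window = int(datetime.fromisoformat(ts).timestamp()) // WINDOW_SECONDS
        counts[(ip, window)] += 1
    return sorted({ip for (ip, _), c in counts.items() if c > THRESHOLD})

sample = [
    "10.0.0.5 2024-01-01T12:00:01 GET /index.html",
    "10.0.0.5 2024-01-01T12:00:02 GET /index.html",
]
print(flag_suspicious(sample))   # [] for this tiny sample
```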
{
"docid": "86e2873956b79e6bc9826763096e639c",
"text": "ever do anything that is a waste of time – and be prepared to wage long, tedious wars over this principle, \" said Michael O'Connor, project manager at Trimble Navigation in Christchurch, New Zealand. This product group at Trimble is typical of the homegrown approach to agile software development methodologies. While interest in agile methodologies has blossomed in the past two years, its roots go back more than a decade. Teams using early versions of Scrum, Dynamic Systems Development Methodology (DSDM), and adaptive software development (ASD) were delivering successful projects in the early-to mid-1990s. This article attempts to answer the question, \" What constitutes agile software development? \" Because of the breadth of agile approaches and the people who practice them, this is not as easy a question to answer as one might expect. I will try to answer this question by first focusing on the sweet-spot problem domain for agile approaches. Then I will delve into the three dimensions that I refer to as agile ecosystems: barely sufficient methodology, collaborative values, and chaordic perspective. Finally, I will examine several of these agile ecosystems. All problems are different and require different strategies. While battlefield commanders plan extensively, they realize that plans are just a beginning; probing enemy defenses (creating change) and responding to enemy actions (responding to change) are more important. Battlefield commanders succeed by defeating the enemy (the mission), not conforming to a plan. I cannot imagine a battlefield commander saying, \" We lost the battle, but by golly, we were successful because we followed our plan to the letter. \" Battlefields are messy, turbulent, uncertain, and full of change. No battlefield commander would say, \" If we just plan this battle long and hard enough, and put repeatable processes in place, we can eliminate change early in the battle and not have to deal with it later on. \" A growing number of software projects operate in the equivalent of a battle zone – they are extreme projects. This is where agile approaches shine. Project teams operating in this zone attempt to utilize leading or bleeding-edge technologies , respond to erratic requirements changes, and deliver products quickly. Projects may have a relatively clear mission , but the specific requirements can be volatile and evolving as customers and development teams alike explore the unknown. These projects, which I call high-exploration factor projects, do not succumb to rigorous, plan-driven methods. …",
"title": ""
},
{
"docid": "e60d699411055bf31316d468226b7914",
"text": "Tabular data is difficult to analyze and to search through, yielding for new tools and interfaces that would allow even non tech-savvy users to gain insights from open datasets without resorting to specialized data analysis tools and without having to fully understand the dataset structure. The goal of our demonstration is to showcase answering natural language questions from tabular data, and to discuss related system configuration and model training aspects. Our prototype is publicly available and open-sourced (see demo )",
"title": ""
},
{
"docid": "5b2a088f0f53b2a960c1ebad0f9e7251",
"text": "The detailed balance method for calculating the radiative recombination limit to the performance of solar cells has been extended to include free carrier absorption and Auger recombination in addition to radiative losses. This method has been applied to crystalline silicon solar cells where the limiting efficiency is found to be 29.8 percent under AM1.5, based on the measured optical absorption spectrum and published values of the Auger and free carrier absorption coefficients. The silicon is assumed to be textured for maximum benefit from light-trapping effects.",
"title": ""
},
{
"docid": "612cd1b5883fdb09dd9ace00174eb4fa",
"text": "Localization in indoor environment poses a fundamental challenge in ubiquitous computing compared to its well-established GPS-based outdoor environment counterpart. This study investigated the feasibility of a WiFi-based indoor positioning system to localize elderly in an elderly center focusing on their orientation. The fingerprinting method of Received Signal Strength Indication (RSSI) from WiFi Access Points (AP) has been employed to discriminate and uniquely identify a position. The discrimination process of the reference points with its orientation have been analyzed with 0.9, 1.8, and 2.7 meter resolution. The experimental result shows that the WiFi-based RSSI fingerprinting method can discriminate the location and orientation of a user within 1.8 meter resolution.",
"title": ""
},
{
"docid": "560b1d80377210ae6f60d375fa97560e",
"text": "We present the design and evaluation of a multi-articular soft exosuit that is portable, fully autonomous, and provides assistive torques to the wearer at the ankle and hip during walking. Traditional rigid exoskeletons can be challenging to perfectly align with a wearer’s biological joints and can have large inertias, which can lead to the wearer altering their natural motion patterns. Exosuits, in comparison, use textiles to create tensile forces over the body in parallel with the muscles, enabling them to be light and not restrict the wearer’s kinematics. We describe the biologically inspired design and function of our exosuit, including a simplified model of the suit’s architecture and its interaction with the body. A key feature of the exosuit is that it can generate forces passively due to the body’s motion, similar to the body’s ligaments and tendons. These passively-generated forces can be supplemented by actively contracting Bowden cables using geared electric motors, to create peak forces in the suit of up to 200N. We define the suit-human series stiffness as an important parameter in the design of the exosuit and measure it on several subjects, and we perform human subjects testing to determine the biomechanical and physiological effects of the suit. Results from a five-subject study showed a minimal effect on gait kinematics and an average best-case metabolic reduction of 6.4%, comparing suit worn unpowered vs powered, during loaded walking with 34.6kg of carried mass including the exosuit and actuators (2.0kg on both legs, 10.1kg total).",
"title": ""
},
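The abstract above localizes users by matching observed RSSI vectors against a fingerprint database of reference points and orientations. A minimal hedged sketch of the matching step is below (nearest-neighbor over RSSI vectors; the fingerprint values and labels are invented for illustration).

```python
# Minimal RSSI fingerprinting: each reference point (location + orientation)
# is represented by a vector of RSSI readings from several access points, and
# an observed scan is matched to the nearest stored fingerprint.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# rows: RSSI fingerprints from 3 APs; labels encode "location_orientation"
fingerprints = np.array([
    [-40, -62, -75],
    [-45, -58, -80],
    [-70, -50, -55],
    [-72, -48, -60],
])
labels = ["P1_north", "P1_east", "P2_north", "P2_east"]

knn = KNeighborsClassifier(n_neighbors=1).fit(fingerprints, labels)
observed = np.array([[-43, -60, -78]])   # a new RSSI scan
print(knn.predict(observed))             # nearest fingerprint, e.g. ['P1_east']
```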
{
"docid": "c514eb87b60db16abd139207d7d24a9d",
"text": "A technique called Time Hopping is proposed for speeding up reinforcement learning algorithms. It is applicable to continuous optimization problems running in computer simulations. Making shortcuts in time by hopping between distant states combined with off-policy reinforcement learning allows the technique to maintain higher learning rate. Experiments on a simulated biped crawling robot confirm that Time Hopping can accelerate the learning process more than seven times.",
"title": ""
},
{
"docid": "e1050f3c38f0b49893da4dd7722aff71",
"text": "The Berkeley lower extremity exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 Ib) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer's forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described",
"title": ""
},
{
"docid": "695c396f27ba31f15f7823511473925c",
"text": "Design and experimental analysis of beam steering in microstrip patch antenna array using dumbbell shaped Defected Ground Structure (DGS) for S-band (5.2 GHz) application was carried out in this study. The Phase shifting in antenna has been achieved using different size and position of dumbbell shape DGS. DGS has characteristics of slow wave, wide stop band and compact size. The obtained radiation pattern has provided steerable main lobe and nulls at predefined direction. The radiation pattern for different size and position of dumbbell structure in microstrip patch antenna array was measured and comparative study has been carried out.",
"title": ""
},
{
"docid": "90ef8ff57b2dac74a0e58c43c222b6c8",
"text": "The paper presents an overview of the research on teaching culture and describes effective pedagogical practices that can be integrated into the second language curriculum. Particularly, this overview tries to advance an approach for teaching culture and language through the theoretical construct of the 3Ps (Products, Practices, Perspectives), combined with an inquiry-based teaching approach utilizing instructional technology. This approach promotes student motivation and engagement that can help overcome past issues of stereotyping and lack of intercultural awareness. The authors summarize the research articles illustrating how teachers successfully integrate digital media together with inquiry learning into instruction to create a rich and meaningful environment in which students interact with authentic data and build their own understanding of a foreign culture’s products, practices, and perspectives. In addition, the authors review the articles that describe more traditional methods of teaching culture and demonstrate how they can be enhanced with technology. “The digital revolution is far more significant than the invention of writing or even of printing. It offers the potential for humans to learn new ways of thinking and organizing social structures.” Douglas Engelbard (1997) The advent of the Standards for Foreign Language Learning in the 21st Century (National Standards in Foreign Language Education Project, 1999) drew attention to the vital role of culture in language classrooms and defined culture as a fundamental part of the second language (L2) learning 5",
"title": ""
},
{
"docid": "b630a6b346edfb073c120cb70169b884",
"text": "Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.",
"title": ""
},
{
"docid": "19a538b6a49be54b153b0a41b6226d1f",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
},
{
"docid": "c6ad70b8b213239b0dd424854af194e2",
"text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.",
"title": ""
},
{
"docid": "03f0614b2479fd470eea5ef39c5a93f9",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a r t i c l e i n f o a b s t r a c t Detailed land use/land cover classification at ecotope level is important for environmental evaluation. In this study, we investigate the possibility of using airborne hyperspectral imagery for the classification of ecotopes. In particular, we assess two tree-based ensemble classification algorithms: Adaboost and Random Forest, based on standard classification accuracy, training time and classification stability. Our results show that Adaboost and Random Forest attain almost the same overall accuracy (close to 70%) with less than 1% difference, and both outperform a neural network classifier (63.7%). Random Forest, however, is faster in training and more stable. Both ensemble classifiers are considered effective in dealing with hyperspectral data. Furthermore, two feature selection methods, the out-of-bag strategy and a wrapper approach feature subset selection using the best-first search method are applied. A majority of bands chosen by both methods concentrate between 1.4 and 1.8 μm at the early shortwave infrared region. Our band subset analyses also include the 22 optimal bands between 0.4 and 2.5 μm suggested in Thenkabail et al. (2004). Accuracy assessments of hyperspectral waveband performance for vegetation analysis applications. Remote Sensing of Environment, 91, 354–376.] due to similarity of the target classes. All of the three band subsets considered in this study work well with both classifiers as in most cases the overall accuracy dropped only by less than 1%. A subset of 53 bands is created by combining all feature subsets and comparing to using the entire set the overall accuracy is the same with Adaboost, and with Random Forest, a 0.2% improvement. The strategy to use a basket of band selection methods works better. Ecotopes belonging to the tree classes are in general classified better than the grass classes. Small adaptations of the classification scheme are recommended to improve the applicability of remote sensing method for detailed ecotope mapping. 1. Introduction Land use/land cover classification is a generic tool for environmental monitoring. To measure subtle changes in the ecosystem, a land use/land cover classification at ecotope level with definitive biological and ecological characteristics is needed. Ecotopes are distinct ecological landscape features …",
"title": ""
},
{
"docid": "5fb0931dafbb024663f2d68faca2f552",
"text": "The instrumentation and control (I&C) systems in nuclear power plants (NPPs) collect signals from sensors measuring plant parameters, integrate and evaluate sensor information, monitor plant performance, and generate signals to control plant devices for a safe operation of NPPs. Although the application of digital technology in industrial control systems (ICS) started a few decades ago, I&C systems in NPPs have utilized analog technology longer than any other industries. The reason for this stems from the fact that NPPs require strong assurance for safety and reliability. In recent years, however, digital I&C systems have been developed and installed in new and operating NPPs. This application of digital computers, and communication system and network technologies in NPP I&C systems accompanies cyber security concerns, similar to other critical infrastructures based on digital technologies. The Stuxnet case in 2010 evoked enormous concern regarding cyber security in NPPs. Thus, performing appropriate cyber security risk assessment for the digital I&C systems of NPPs, and applying security measures to the systems, has become more important nowadays. In general, approaches to assure cyber security in NPPs may be compatible with those for ICS and/or supervisory control and data acquisition (SCADA) systems in many aspects. Cyber security requirements and the risk assessment methodologies for ICS and SCADA systems are adopted from those for information technology (IT) systems. Many standards and guidance documents have been published for these areas [1~10]. Among them NIST SP 800-30 [4], NIST SP 800-37 [5], and NIST 800-39 [6] describe the risk assessment methods, NIST SP 800-53 [7] and NIST SP 800-53A [8] address security controls for IT systems. NIST SP 800-82 [10] describes the differences between IT systems and ICS and provides guidance for securing ICS, including SCADA systems, distributed control systems (DCS), and other systems performing control functions. As NIST SP 800-82 noted the differences between IT The applications of computers and communication system and network technologies in nuclear power plants have expanded recently. This application of digital technologies to the instrumentation and control systems of nuclear power plants brings with it the cyber security concerns similar to other critical infrastructures. Cyber security risk assessments for digital instrumentation and control systems have become more crucial in the development of new systems and in the operation of existing systems. Although the instrumentation and control systems of nuclear power plants are similar to industrial control systems, the former have specifications that differ from the latter in terms of architecture and function, in order to satisfy nuclear safety requirements, which need different methods for the application of cyber security risk assessment. In this paper, the characteristics of nuclear power plant instrumentation and control systems are described, and the considerations needed when conducting cyber security risk assessments in accordance with the lifecycle process of instrumentation and control systems are discussed. 
For cyber security risk assessments of instrumentation and control systems, the activities and considerations necessary for assessments during the system design phase or component design and equipment supply phase are presented in the following 6 steps: 1) System Identification and Cyber Security Modeling, 2) Asset and Impact Analysis, 3) Threat Analysis, 4) Vulnerability Analysis, 5) Security Control Design, and 6) Penetration test. The results from an application of the method to a digital reactor protection system are described.",
"title": ""
}
] | scidocsrr |
2db0bbb2917530f2d8fd0b82aece68d2 | Time and sample efficient discovery of Markov blankets and direct causal relations | [
{
"docid": "b5f8f310f2f4ed083b20f42446d27feb",
"text": "This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct as well as empirical evidence (from real world applications and simulation tests) that demonstrates that these systems work efficiently and reliably in practice.",
"title": ""
},
{
"docid": "2dbd0d931c4c35fb4a7f24495b099fc9",
"text": "This paper presents a number of new algorithms for discovering the Markov Blanket of a target variable T from training data. The Markov Blanket can be used for variable selection for classification, for causal discovery, and for Bayesian Network learning. We introduce a low-order polynomial algorithm and several variants that soundly induce the Markov Blanket under certain broad conditions in datasets with thousands of variables and compare them to other state-of-the-art local and global methods with excel-",
"title": ""
}
] | [
{
"docid": "9cedfdbd91101763e8acad9cd4411faf",
"text": "In this paper, we describe current efforts towards interlinking music-related datasets on the Web. We first explain some initial interlinking experiences, and the poor results obtained by taking a näıve approach. We then detail a particular interlinking algorithm, taking into account both the similarities of web resources and of their neighbours. We detail the application of this algorithm in two contexts: to link a Creative Commons music dataset to an editorial one, and to link a personal music collection to corresponding web identifiers. The latter provides a user with personally meaningful entry points for exploring the web of data, and we conclude by describing some concrete tools built to generate and use such links.",
"title": ""
},
{
"docid": "135deb35cf3600cba8e791d604e26ffb",
"text": "Much of this book describes the algorithms behind search engines and information retrieval systems. By contrast, this chapter focuses on the human users of search systems, and the window through which search systems are seen: the search user interface. The role of the search user interface is to aid in the searcher's understanding and expression of their information needs, and to help users formulate their queries, select among available information sources, understand search results, and keep track of the progress of their search. In the first edition of this book, very little was known about what makes for an effective search interface. In the intervening years, much has become understood about which ideas work from a usability perspective, and which do not. This chapter briefly summarizes the state of the art of search interface design, both in terms of developments in academic research as well as in deployment in commercial systems. The sections that follow discuss how people search, search interfaces today, visualization in search interfaces, and the design and evaluation of search user interfaces. Search tasks range from the relatively simple (e.g., looking up disputed facts or finding weather information) to the rich and complex (e.g., job seeking and planning vacations). Search interfaces should support a range of tasks, while taking into account how people think about searching for information. This section summarizes theoretical models about and empirical observations of the process of online information seeking. Information Lookup versus Exploratory Search User interaction with search interfaces differs depending on the type of task, the amount of time and effort available to invest in the process, and the domain expertise of the information seeker. The simple interaction dialogue used in Web search engines is most appropriate for finding answers to questions or to finding Web sites or other resources that act as search starting points. But, as Marchionini [89] notes, the \" turn-taking \" interface of Web search engines is inherently limited and is many cases is being supplanted by speciality search engines – such as for travel and health information – that offer richer interaction models. Marchionini [89] makes a distinction between information lookup and exploratory search. Lookup tasks are akin to fact retrieval or question answering, and are satisfied by short, discrete pieces of information: numbers, dates, names, or names of files or Web sites. Standard Web search interactions (as well as standard database management system queries) can …",
"title": ""
},
{
"docid": "1ce8e79e7fe4761858b3e83c49b80c80",
"text": "Taking the concept of thin clients to the limit, this paper proposes that desktop machines should just be simple, stateless I/O devices (display, keyboard, mouse, etc.) that access a shared pool of computational resources over a dedicated interconnection fabric --- much in the same way as a building's telephone services are accessed by a collection of handset devices. The stateless desktop design provides a useful mobility model in which users can transparently resume their work on any desktop console.This paper examines the fundamental premise in this system design that modern, off-the-shelf interconnection technology can support the quality-of-service required by today's graphical and multimedia applications. We devised a methodology for analyzing the interactive performance of modern systems, and we characterized the I/O properties of common, real-life applications (e.g. Netscape, streaming video, and Quake) executing in thin-client environments. We have conducted a series of experiments on the Sun Ray™ 1 implementation of this new system architecture, and our results indicate that it provides an effective means of delivering computational services to a workgroup.We have found that response times over a dedicated network are so low that interactive performance is indistinguishable from a dedicated workstation. A simple pixel encoding protocol requires only modest network resources (as little as a 1Mbps home connection) and is quite competitive with the X protocol. Tens of users running interactive applications can share a processor without any noticeable degradation, and many more can share the network. The simple protocol over a 100Mbps interconnection fabric can support streaming video and Quake at display rates and resolutions which provide a high-fidelity user experience.",
"title": ""
},
{
"docid": "b2a895a5f7f8455888f61fd64d7ea367",
"text": "Widespread adoption of hydrogen as a vehicular fuel depends critically upon the ability to store hydrogen on-board at high volumetric and gravimetric densities, as well as on the ability to extract/insert it at sufficiently rapid rates. As current storage methods based on physical means--high-pressure gas or (cryogenic) liquefaction--are unlikely to satisfy targets for performance and cost, a global research effort focusing on the development of chemical means for storing hydrogen in condensed phases has recently emerged. At present, no known material exhibits a combination of properties that would enable high-volume automotive applications. Thus new materials with improved performance, or new approaches to the synthesis and/or processing of existing materials, are highly desirable. In this critical review we provide a practical introduction to the field of hydrogen storage materials research, with an emphasis on (i) the properties necessary for a viable storage material, (ii) the computational and experimental techniques commonly employed in determining these attributes, and (iii) the classes of materials being pursued as candidate storage compounds. Starting from the general requirements of a fuel cell vehicle, we summarize how these requirements translate into desired characteristics for the hydrogen storage material. Key amongst these are: (a) high gravimetric and volumetric hydrogen density, (b) thermodynamics that allow for reversible hydrogen uptake/release under near-ambient conditions, and (c) fast reaction kinetics. To further illustrate these attributes, the four major classes of candidate storage materials--conventional metal hydrides, chemical hydrides, complex hydrides, and sorbent systems--are introduced and their respective performance and prospects for improvement in each of these areas is discussed. Finally, we review the most valuable experimental and computational techniques for determining these attributes, highlighting how an approach that couples computational modeling with experiments can significantly accelerate the discovery of novel storage materials (155 references).",
"title": ""
},
{
"docid": "d97af6f656cba4018a5d367861a07f01",
"text": "Traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists on moving data to be processed close to where it was generated. To this end, Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex environments is critical for IoT Service providers. In this vision paper, we present a framework, called Foggy, that facilitates dynamic resource provisioning and automated application deployment in Fog Computing architectures. We analyze several applications and identify their requirements that need to be taken intoconsideration in our design of the Foggy framework. We implemented a proof of concept of a simple IoT application continuous deployment using Raspberry Pi boards.",
"title": ""
},
{
"docid": "5b6bf9ee0fed37b20d4b3607717d2f77",
"text": "In order to understand the organization of the cerebral cortex, it is necessary to create a map or parcellation of cortical areas. Reconstructions of the cortical surface created from structural MRI scans, are frequently used in neuroimaging as a common coordinate space for representing multimodal neuroimaging data. These meshes are used to investigate healthy brain organization as well as abnormalities in neurological and psychiatric conditions. We frame cerebral cortex parcellation as a mesh segmentation task, and address it by taking advantage of recent advances in generalizing convolutions to the graph domain. In particular, we propose to assess graph convolutional networks and graph attention networks, which, in contrast to previous mesh parcellation models, exploit the underlying structure of the data to make predictions. We show experimentally on the Human Connectome Project dataset that the proposed graph convolutional models outperform current state-ofthe-art and baselines, highlighting the potential and applicability of these methods to tackle neuroimaging challenges, paving the road towards a better characterization of brain diseases.",
"title": ""
},
{
"docid": "1e3d8e4d78052cfccc2f23dadcfa841b",
"text": "OBJECTIVE\nAlthough the underlying cause of Huntington's disease (HD) is well established, the actual pathophysiological processes involved remain to be fully elucidated. In other proteinopathies such as Alzheimer's and Parkinson's diseases, there is evidence for impairments of the cerebral vasculature as well as the blood-brain barrier (BBB), which have been suggested to contribute to their pathophysiology. We investigated whether similar changes are also present in HD.\n\n\nMETHODS\nWe used 3- and 7-Tesla magnetic resonance imaging as well as postmortem tissue analyses to assess blood vessel impairments in HD patients. Our findings were further investigated in the R6/2 mouse model using in situ cerebral perfusion, histological analysis, Western blotting, as well as transmission and scanning electron microscopy.\n\n\nRESULTS\nWe found mutant huntingtin protein (mHtt) aggregates to be present in all major components of the neurovascular unit of both R6/2 mice and HD patients. This was accompanied by an increase in blood vessel density, a reduction in blood vessel diameter, as well as BBB leakage in the striatum of R6/2 mice, which correlated with a reduced expression of tight junction-associated proteins and increased numbers of transcytotic vesicles, which occasionally contained mHtt aggregates. We confirmed the existence of similar vascular and BBB changes in HD patients.\n\n\nINTERPRETATION\nTaken together, our results provide evidence for alterations in the cerebral vasculature in HD leading to BBB leakage, both in the R6/2 mouse model and in HD patients, a phenomenon that may, in turn, have important pathophysiological implications.",
"title": ""
},
{
"docid": "561e9f599e5dc470ca6f57faa62ebfce",
"text": "Rapid learning requires flexible representations to quickly adopt to new evidence. We develop a novel class of models called Attentive Recurrent Comparators (ARCs) that form representations of objects by cycling through them and making observations. Using the representations extracted by ARCs, we develop a way of approximating a dynamic representation space and use it for oneshot learning. In the task of one-shot classification on the Omniglot dataset, we achieve the state of the art performance with an error rate of 1.5%. This represents the first super-human result achieved for this task with a generic model that uses only pixel information.",
"title": ""
},
{
"docid": "dc9a168fb4c586650b8f11cb5cdd725c",
"text": "Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are wide-spread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations.",
"title": ""
},
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "e8b0536f5d749b5f6f5651fe69debbe1",
"text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.",
"title": ""
},
{
"docid": "ddd7aaa70841b172b4dc58263cc8a94e",
"text": "Fingerprint-spoofing attack often occurs when imposters gain access illegally by using artificial fingerprints, which are made of common fingerprint materials, such as silicon, latex, etc. Thus, to protect our privacy, many fingerprint liveness detection methods are put forward to discriminate fake or true fingerprint. Current work on liveness detection for fingerprint images is focused on the construction of complex handcrafted features, but these methods normally destroy or lose spatial information between pixels. Different from existing methods, convolutional neural network (CNN) can generate high-level semantic representations by learning and concatenating low-level edge and shape features from a large amount of labeled data. Thus, CNN is explored to solve the above problem and discriminate true fingerprints from fake ones in this paper. To reduce the redundant information and extract the most distinct features, ROI and PCA operations are performed for learned features of convolutional layer or pooling layer. After that, the extracted features are fed into SVM classifier. Experimental results based on the LivDet (2013) and the LivDet (2011) datasets, which are captured by using different fingerprint materials, indicate that the classification performance of our proposed method is both efficient and convenient compared with the other previous methods.",
"title": ""
},
{
"docid": "3508e1a4a4c04127792268509c1f572d",
"text": "In this paper predictions of the Normalized Difference Vegetation Index (NDVI) data recorded by satellites over Ventspils Municipality in Courland, Latvia are discussed. NDVI is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. Artificial Neural Networks (ANN) are computational models and universal approximators, which are widely used for nonlinear, non-stationary and dynamical process modeling and forecasting. In this paper Elman Recurrent Neural Networks (ERNN) are used to make one-step-ahead prediction of univariate NDVI time series.",
"title": ""
},
{
"docid": "9a82f33d84cd622ccd66a731fc9755de",
"text": "To discover relationships and associations between pairs of variables in large data sets have become one of the most significant challenges for bioinformatics scientists. To tackle this problem, maximal information coefficient (MIC) is widely applied as a measure of the linear or non-linear association between two variables. To improve the performance of MIC calculation, in this work we present MIC++, a parallel approach based on the heterogeneous accelerators including Graphic Processing Unit (GPU) and Field Programmable Gate Array (FPGA) engines, focusing on both coarse-grained and fine-grained parallelism. As the evaluation of MIC++, we have demonstrated the performance on the state-of-the-art GPU accelerators and the FPGA-based accelerators. Preliminary estimated results show that the proposed parallel implementation can significantly achieve more than 6X-14X speedup using GPU, and 4X-13X using FPGA-based accelerators.",
"title": ""
},
{
"docid": "36a1e7716d6cdac89911ca0b52c019ff",
"text": "Some recent sequence-to-sequence models like the Transformer (Vaswani et al., 2017) can score all output posiQons in parallel. We propose a simple algorithmic technique that exploits this property to generate mulQple tokens in parallel at decoding Qme with liTle to no loss in quality. Our fastest models exhibit wall-clock speedups of up to 4x over standard greedy decoding on the tasks of machine translaQon and image super-resoluQon.",
"title": ""
},
{
"docid": "46fb354d3c85325312fe4e03d998632c",
"text": "Driver distraction has been identified as a highpriority topic by the National Highway Traffic Safety Administration, reflecting concerns about the compatibility of certain in-vehicle technologies with the driving task, whether drivers are making potentially dangerous decisions about when to interact with invehicle technologies while driving, and that these trends may accelerate as new technologies continue to become available. Since 1991, NHTSA has conducted research to understand the factors that contribute to driver distraction and to develop methods to assess the extent to which in-vehicle technologies may contribute to crashes. This paper summarizes significant findings from past NHTSA research in the area of driver distraction and workload, provides an overview of current ongoing research, and describes upcoming research that will be conducted, including research using the National Advanced Driving Simulator and work to be conducted at NHTSA’s Vehicle Research and Test Center. Preliminary results of the ongoing research are also presented.",
"title": ""
},
{
"docid": "90e254138a5912daf0650f5ad794743c",
"text": "Large scale graph processing represents an interesting challenge due to the lack of locality. This paper presents PathGraph for improving iterative graph computation on graphs with billions of edges. Our system design has three unique features: First, we model a large graph using a collection of tree-based partitions and use an path-centric computation rather than vertex-centric or edge-centric computation. Our parallel computation model significantly improves the memory and disk locality for performing iterative computation algorithms. Second, we design a compact storage that further maximize sequential access and minimize random access on storage media. Third, we implement the path-centric computation model by using a scatter/gather programming model, which parallels the iterative computation at partition tree level and performs sequential updates for vertices in each partition tree. The experimental results show that the path-centric approach outperforms vertex-centric and edge-centric systems on a number of graph algorithms for both in-memory and out-of-core graphs.",
"title": ""
},
{
"docid": "252ecad359768520837f34e9c3e8b647",
"text": "Topic area focus. As part of the Southwestern Regional Educational Laboratory’s (REL Southwest) fast-turnaround projects, the American Institutes for Research (AIR) will conduct a systematic review of research-based evidence on the effects of professional development on growth in student learning. The main focus of the review will be how students’ achievement in three core academic subjects (English/language arts/reading, mathematics, and science) is affected by professional development activities that are designed to enhance K–12 teachers’ knowledge and skills and to transform their classroom practices. A basic assumption of this review is that the effects of professional development on student achievement are mediated by increased teacher knowledge and improved teaching in the classroom (see appendix B, figure B.1). Existing literature reviews (Loucks-Horsley & Matsumoto, 1999; Supovitz, 2001) indicate that the volume of literature on the effect of professional development on student learning is thinner than that on the effects of professional development on teacher learning and classroom teaching practices. Therefore, we expect that our literature search will turn up existing studies on the effects of professional development on teacher learning and teaching practice (but which fall short of demonstrating its effect on student achievement), as well as those that take the next step and address the link between professional development and student outcomes. Our tally of excluded studies will be the means by which we document the paucity of research that directly examines the effect of professional development on student achievement. This systematic review of evidence will address the following research questions: What is the impact of providing professional • development to teachers on student achievement? If a sufficient number of studies remain in the final pool, we will also try to disaggregate the results to answer: Does the effect of teacher professional • development on student achievement vary by type of professional development provided (for example, summer institutes, workshops, online training)? Does the effect of teacher professional de• velopment on student achievement vary by content domain (English/language arts, mathematics, science)? Does the effect of teacher professional de• velopment on student achievement vary by grade level (elementary, secondary)? General inclusion criteria Populations to be included. Target populations for this review include the students of K–12 teachers of English/language arts/reading, mathematics, and science. Although we would like to be able to examine how the effect of teacher professional development on student achievement varies by student characteristics (for example, English language learners, economically disadvantaged students, students with disabilities), we do not expect to find many studies that directly address student outcomes, which are distal effects of professional development given to teachers. If our final review pool contains studies that allow for this disaggregation, we will include those findings in the final report. Types of professional development to be included. The No Child Left Behind provisions shed light on 30 reviewing the evidence On hOw teacher prOfeSSiOnal develOpment affectS Student achievement what constitutes professional development (see appendix C for detailed definitions). 
It encompasses a wide range of activities that are designed to provide teachers with opportunities to deepen their knowledge in the subject matter that they teach, improve teaching skills, and better understand how students learn and think. Therefore, we take an inclusive view on the form and substance of professional development (Kennedy, 1998). A variety of forms (format and structure) and substances (content and purpose) of professional development will be considered for the inclusion of review as long as they are designed to assist teachers of English/language arts/reading, mathematics, and science to achieve their desired goals for enhancing student achievement outcomes. The substance of professional development • may include combinations of the following areas: Research-based reform models, curri• cula, instructional strategies and models, or materials (for example, Cognitively Guided Instruction, America’s Choice, Open Court, Success for All) Content knowledge (for example, phone• mic awareness, algebraic concepts, use of manipulatives, conservation) Pedagogical content knowledge of a • particular subject: knowledge about how students learn a particular subject and understanding of student thinking Generic instructional strategies or teach• ing skills that are applicable to any subject (for example, differentiated instruction, cooperative learning, and reciprocal learning); this may include such special topics as classroom management, use of assessment data, alignment of instruction with standards, and teaching students with special needs in learning English, mathematics, or science (for example, English language learners and students with disabilities). The form of professional development to be • included in the review may involve: Traditional types of professional devel• opment such as workshops, summer institutes, and conferences. Reform types of professional develop• ment, such as coaching and mentoring, that are embedded in teachers’ classroom teaching. Online professional development such • as online courses, web-based teaching modules, or virtual teacher-learning communities. Types of research studies to be included. Our review of professional development literature focuses on studies that involve student learning in reading, mathematics, and science in grades K–12. To be included in the review, a study must meet several relevancy criteria: Topic. • The study has to deal with professional development applied to teaching in reading, mathematics, and science. The study is required to focus on the effects of teachers’ inservice professional development on student learning. Hence, this review does not include studies that are primarily focused on: Effects of pre-service teacher preparation • on student learning. Effects of teacher quality in general on • student achievement. Effects of comprehensive reform models, • curricula, instructional models, materials, and assessment on student achievement, with little attention to professional development (for example, teacher",
"title": ""
},
{
"docid": "67321b9f13fe260e8365efe5c9ce878d",
"text": "It will take a lot of conversation to make data science work. Data scientists can't do it on their own. Success in data science requires a multiskilled project team with data scientists and domain experts working closely together.",
"title": ""
}
] | scidocsrr |
494618e843cad4d38743b862d5b3d3a7 | Measuring the Lifetime Value of Customers Acquired from Google Search Advertising | [
{
"docid": "bfe762fc6e174778458b005be75d8285",
"text": "The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed istribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a randomeffects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.",
"title": ""
}
] | [
{
"docid": "5b9488755fb3146adf5b6d8d767b7c8f",
"text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.",
"title": ""
},
{
"docid": "bda892eb6cdcc818284f56b74c932072",
"text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.",
"title": ""
},
{
"docid": "24d0d2a384b2f9cefc6e5162cdc52c45",
"text": "Food classification from images is a fine-grained classification problem. Manual curation of food images is cost, time and scalability prohibitive. On the other hand, web data is available freely but contains noise. In this paper, we address the problem of classifying food images with minimal data curation. We also tackle a key problems with food images from the web where they often have multiple cooccuring food types but are weakly labeled with a single label. We first demonstrate that by sequentially adding a few manually curated samples to a larger uncurated dataset from two web sources, the top-1 classification accuracy increases from 50.3% to 72.8%. To tackle the issue of weak labels, we augment the deep model with Weakly Supervised learning (WSL) that results in an increase in performance to 76.2%. Finally, we show some qualitative results to provide insights into the performance improvements using the proposed ideas.",
"title": ""
},
{
"docid": "723f7d157cacfcad4523f7544a9d1c77",
"text": "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.",
"title": ""
},
{
"docid": "faf83822de9f583bebc120aecbcd107a",
"text": "Relapsed B-cell lymphomas are incurable with conventional chemotherapy and radiation therapy, although a fraction of patients can be cured with high-dose chemoradiotherapy and autologous stemcell transplantation (ASCT). We conducted a phase I/II trial to estimate the maximum tolerated dose (MTD) of iodine 131 (131I)–tositumomab (anti-CD20 antibody) that could be combined with etoposide and cyclophosphamide followed by ASCT in patients with relapsed B-cell lymphomas. Fifty-two patients received a trace-labeled infusion of 1.7 mg/kg 131Itositumomab (185-370 MBq) followed by serial quantitative gamma-camera imaging and estimation of absorbed doses of radiation to tumor sites and normal organs. Ten days later, patients received a therapeutic infusion of 1.7 mg/kg tositumomab labeled with an amount of 131I calculated to deliver the target dose of radiation (20-27 Gy) to critical normal organs (liver, kidneys, and lungs). Patients were maintained in radiation isolation until their total-body radioactivity was less than 0.07 mSv/h at 1 m. They were then given etoposide and cyclophosphamide followed by ASCT. The MTD of 131Itositumomab that could be safely combined with 60 mg/kg etoposide and 100 mg/kg cyclophosphamide delivered 25 Gy to critical normal organs. The estimated overall survival (OS) and progressionfree survival (PFS) of all treated patients at 2 years was 83% and 68%, respectively. These findings compare favorably with those in a nonrandomized control group of patients who underwent transplantation, external-beam total-body irradiation, and etoposide and cyclophosphamide therapy during the same period (OS of 53% and PFS of 36% at 2 years), even after adjustment for confounding variables in a multivariable analysis. (Blood. 2000;96:2934-2942)",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "ea1d408c4e4bfe69c099412da30949b0",
"text": "The amount of scientific papers in the Molecular Biology field has experienced an enormous growth in the last years, prompting the need of developing automatic Information Extraction (IE) systems. This work is a first step towards the ontology-based domain-independent generalization of a system that identifies Escherichia coli regulatory networks. First, a domain ontology based on the RegulonDB database was designed and populated. After that, the steps of the existing IE system were generalized to use the knowledge contained in the ontology, so that it could be potentially applied to other domains. The resulting system has been tested both with abstract and full articles that describe regulatory interactions for E. coli, obtaining satisfactory results. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b94082787aed8e947ae798b74bdd552",
"text": "AIM\nThe aim of the study was to determine the prevalence of high anxiety and substance use among university students in the Republic of Macedonia.\n\n\nMATERIAL AND METHODS\nThe sample comprised 742 students, aged 18-22 years, who attended the first (188 students) and second year studies at the Medical Faculty (257), Faculty of Dentistry (242), and Faculty of Law (55) within Ss. Cyril and Methodius University in Skopje. As a psychometric test the Beck Anxiety Inventory (BAI) was used. It is a self-rating questionnaire used for measuring the severity of anxiety. A psychiatric interview was performed with students with BAI scores > 25. A self-administered questionnaire consisted of questions on the habits of substance (alcohol, nicotine, sedative-hypnotics, and illicit drugs) use and abuse was also used. For statistical evaluation Statistica 7 software was used.\n\n\nRESULTS\nThe highest mean BAI scores were obtained by first year medical students (16.8 ± 9.8). Fifteen percent of all students and 20% of first year medical students showed high levels of anxiety. Law students showed the highest prevalence of substance use and abuse.\n\n\nCONCLUSION\nHigh anxiety and substance use as maladaptive behaviours among university students are not systematically investigated in our country. The study showed that students show these types of unhealthy reactions, regardless of the curriculum of education. More attention should be paid to students in the early stages of their education. A student counselling service which offers mental health assistance needs to be established within University facilities in R. Macedonia alongside the existing services in our health system.",
"title": ""
},
{
"docid": "d1525fdab295a16d5610210e80fb8104",
"text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.",
"title": ""
},
{
"docid": "7884c51de6f53d379edccac50fd55caa",
"text": "Objective. We analyze the process of changing ethical attitudes over time by focusing on a specific set of ‘‘natural experiments’’ that occurred over an 18-month period, namely, the accounting scandals that occurred involving Enron/Arthur Andersen and insider-trader allegations related to ImClone. Methods. Given the amount of media attention devoted to these ethical scandals, we test whether respondents in a cross-sectional sample taken over 18 months become less accepting of ethically charged vignettes dealing with ‘‘accounting tricks’’ and ‘‘insider trading’’ over time. Results. We find a significant and gradual decline in the acceptance of the vignettes over the 18-month period. Conclusions. Findings presented here may provide valuable insight into potential triggers of changing ethical attitudes. An intriguing implication of these results is that recent highly publicized ethical breaches may not be only a symptom, but also a cause of changing attitudes.",
"title": ""
},
{
"docid": "8d208bb5318dcbc5d941df24906e121f",
"text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.",
"title": ""
},
{
"docid": "584de328ade02c34e36e2006f3e66332",
"text": "The HP-ASD technology has experienced a huge development in the last decade. This can be appreciated by the large number of recently introduced drive configurations on the market. In addition, many industrial applications are reaching MV operation and megawatt range or have experienced changes in requirements on efficiency, performance, and power quality, making the use of HP-ASDs more attractive. It can be concluded that, HP-ASDs is an enabling technology ready to continue powering the future of industry for the decades to come.",
"title": ""
},
{
"docid": "7aa6b9cb3a7a78ec26aff130a1c9015a",
"text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.",
"title": ""
},
{
"docid": "ef9cea211dfdc79f5044a0da606bafb5",
"text": "Gender identity disorder (GID) refers to transsexual individuals who feel that their assigned biological gender is incongruent with their gender identity and this cannot be explained by any physical intersex condition. There is growing scientific interest in the last decades in studying the neuroanatomy and brain functions of transsexual individuals to better understand both the neuroanatomical features of transsexualism and the background of gender identity. So far, results are inconclusive but in general, transsexualism has been associated with a distinct neuroanatomical pattern. Studies mainly focused on male to female (MTF) transsexuals and there is scarcity of data acquired on female to male (FTM) transsexuals. Thus, our aim was to analyze structural MRI data with voxel based morphometry (VBM) obtained from both FTM and MTF transsexuals (n = 17) and compare them to the data of 18 age matched healthy control subjects (both males and females). We found differences in the regional grey matter (GM) structure of transsexual compared with control subjects, independent from their biological gender, in the cerebellum, the left angular gyrus and in the left inferior parietal lobule. Additionally, our findings showed that in several brain areas, regarding their GM volume, transsexual subjects did not differ significantly from controls sharing their gender identity but were different from those sharing their biological gender (areas in the left and right precentral gyri, the left postcentral gyrus, the left posterior cingulate, precuneus and calcarinus, the right cuneus, the right fusiform, lingual, middle and inferior occipital, and inferior temporal gyri). These results support the notion that structural brain differences exist between transsexual and healthy control subjects and that majority of these structural differences are dependent on the biological gender.",
"title": ""
},
{
"docid": "459a3bc8f54b8f7ece09d5800af7c37b",
"text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.",
"title": ""
},
{
"docid": "f740191f7c6d27811bb09bf40e8da021",
"text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that",
"title": ""
},
{
"docid": "af1ddb07f08ad6065c004edae74a3f94",
"text": "Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias – the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.",
"title": ""
},
{
"docid": "b141c5a1b7a92856b9dc3e3958a91579",
"text": "Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.",
"title": ""
},
{
"docid": "3dcce7058de4b41ad3614561832448a4",
"text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.",
"title": ""
}
] | scidocsrr |
d2f41b7b54666c0c6d95140ca3095cc6 | PALM-COEIN FIGO Classification for diagnosis of Abnormal Uterine Bleeding : Practical Utility of same at Tertiary Care Centre in North India | [
{
"docid": "cfa8e5af1a37c96617164ea319dba4a5",
"text": "In 2011, the FIGO classification system (PALM-COEIN) was published to standardize terminology, diagnostic and investigations of causes of abnormal uterine bleeding (AUB). According to FIGO new classification, in the absence of structural etiology, the formerly called \"dysfunctional uterine bleeding\" should be avoided and clinicians should state if AUB are caused by coagulation disorders (AUB-C), ovulation disorder (AUB-O), or endometrial primary dysfunction (AUB-E). Since this publication, some societies have released or revised their guidelines for the diagnosis and the management of the formerly called \"dysfunctional uterine bleeding\" according new FIGO classification. In this review, we summarize the most relevant new guidelines for the diagnosis and the management of AUB-C, AUB-O, and AUB-E.",
"title": ""
}
] | [
{
"docid": "c5081f86c4a173a40175e65b05d9effb",
"text": "Convergence insufficiency is characterized by an inability to maintain effortless alignment of the two eyes (binocular convergence) while performing near tasks. Conventional rehabilitative vision therapy for the condition is monotonous and dull, leading to low levels of compliance. If the therapy is not performed then improvements in the condition are unlikely. This paper examines the use of computer games as a new delivery paradigm for vision therapy, specifically at how they can be used in the treatment of convergence insufficiency while at home. A game was created and tested in a small scale clinical trial. Results show clinical improvements, as well as high levels of compliance and motivation. Additionally, the game was able to objectively track patient progress and compliance.",
"title": ""
},
{
"docid": "c69e249b0061057617eb8c70d26df0b4",
"text": "This paper explores the use of GaN MOSFETs and series-connected inverter segments to realize an IMMD. The proposed IMMD topology reduces the segment voltage and offers an opportunity to utilize wide bandgap 200V GaN MOSFETs. Consequently, a reduction in IMMD size is achieved by eliminating inverter heat sink and optimizing the choice of DC-link capacitors. Gate signals of the IMMD segments are shifted (interleaved) to cancel the capacitor voltage ripple and further reduce the capacitor size. Motor winding configuration and coupling effect are also investigated to match with the IMMD design. An actively controlled balancing resistor is programmed to balance the voltages of series connected IMMD segments. Furthermore, this paper presents simulation results as well as experiment results to validate the proposed design.",
"title": ""
},
{
"docid": "981d140731d8a3cdbaebacc1fd26484a",
"text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.",
"title": ""
},
{
"docid": "b68001bf953e63db5ef12be3b20a90aa",
"text": "Contrast sensitivity (CS) is the ability of the observer to discriminate between adjacent stimuli on the basis of their differences in relative luminosity (contrast) rather than their absolute luminances. In previous studies, using a narrow range of species, birds have been reported to have low contrast detection thresholds relative to mammals and fishes. This was an unexpected finding because birds had been traditionally reported to have excellent visual acuity and color vision. This study reports CS in six species of birds that represent a range of visual adaptations to varying environments. The species studied were American kestrels (Falco sparverius), barn owls (Tyto alba), Japanese quail (Coturnix coturnix japonica), white Carneaux pigeons (Columba livia), starlings (Sturnus vulgaris), and red-bellied woodpeckers (Melanerpes carolinus). Contrast sensitivity functions (CSFs) were obtained from these birds using the pattern electroretinogram and compared with CSFs from the literature when possible. All of these species exhibited low CS relative to humans and most mammals, which suggests that low CS is a general characteristic of birds. Their low maximum CS may represent a trade-off of contrast detection for some other ecologically vital capacity such as UV detection or other aspects of their unique color vision.",
"title": ""
},
{
"docid": "8e7d3462f93178f6c2901a429df22948",
"text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.",
"title": ""
},
{
"docid": "d9791131cefcf0aa18befb25c12b65b2",
"text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or long-est common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.",
"title": ""
},
{
"docid": "645395d46f653358d942742711d50c0b",
"text": "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we propose ShapeNet, a generalization of the popular convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. Our construction is based on a local geodesic system of polar coordinates to extract “patches”, which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape feature descriptors that significantly outperform recent state-of-the-art methods, and show that previous approaches such as heat and wave kernel signatures, optimal spectral descriptors, and intrinsic shape contexts can be obtained as particular configurations of ShapeNet. CR Categories: I.2.6 [Artificial Intelligence]: Learning— Connectionism and neural nets",
"title": ""
},
{
"docid": "f05cb5a3aeea8c4151324ad28ad4dc93",
"text": "With the discovery of induced pluripotent stem (iPS) cells, it is now possible to convert differentiated somatic cells into multipotent stem cells that have the capacity to generate all cell types of adult tissues. Thus, there is a wide variety of applications for this technology, including regenerative medicine, in vitro disease modeling, and drug screening/discovery. Although biological and biochemical techniques have been well established for cell reprogramming, bioengineering technologies offer novel tools for the reprogramming, expansion, isolation, and differentiation of iPS cells. In this article, we review these bioengineering approaches for the derivation and manipulation of iPS cells and focus on their relevance to regenerative medicine.",
"title": ""
},
{
"docid": "4345ed089e019402a5a4e30497bccc8a",
"text": "BACKGROUND\nFluridil, a novel topical antiandrogen, suppresses the human androgen receptor. While highly hydrophobic and hydrolytically degradable, it is systemically nonresorbable. In animals, fluridil demonstrated high local and general tolerance.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of a topical anti- androgen, fluridil, in male androgenetic alopecia.\n\n\nMETHODS\nIn 20 men, for 21 days, occlusive forearm patches with 2, 4, and 6% fluridil, isopropanol, and/or vaseline were applied. In 43 men with androgenetic alopecia (AGA), Norwood grade II-Va, 2% fluridil was evaluated in a double-blind, placebo-controlled study after 3 months clinically by phototrichograms, hematology, and blood chemistry including analysis for fluridil, and at 9 months by phototrichograms.\n\n\nRESULTS\nNeither fluridil nor isopropanol showed sensitization/irritation potential, unlike vaseline. In all AGA subjects, baseline anagen/telogen counts were equal. After 3 months, the average anagen percentage did not change in placebo subjects, but increased in fluridil subjects from 76% to 85%, and at 9 months to 87%. In former placebo subjects, fluridil increased the anagen percentage after 6 months from 76% to 85%. Sexual functions, libido, hematology, and blood chemistry values were normal throughout, except that at 3 months, in the spring, serum testosterone increased within the normal range equally in placebo and fluridil groups. No fluridil or its decomposition product, BP-34, was detectable in the serum at 0, 3, or 90 days.\n\n\nCONCLUSION\nTopical fluridil is nonirritating, nonsensitizing, nonresorbable, devoid of systemic activity, and anagen promoting after daily use in most AGA males.",
"title": ""
},
{
"docid": "dd211105651b376b40205eb16efe1c25",
"text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.",
"title": ""
},
{
"docid": "f5e56872c66a126ada7d54c218c06836",
"text": "INTRODUCTION\nGender dysphoria, a marked incongruence between one's experienced gender and biological sex, is commonly believed to arise from discrepant cerebral and genital sexual differentiation. With the discovery that estrogen receptor β is associated with female-to-male (FtM) but not with male-to-female (MtF) gender dysphoria, and given estrogen receptor α involvement in central nervous system masculinization, it was hypothesized that estrogen receptor α, encoded by the ESR1 gene, also might be implicated.\n\n\nAIM\nTo investigate whether ESR1 polymorphisms (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 and their haplotypes are associated with gender dysphoria in adults.\n\n\nMETHODS\nMolecular analysis was performed in peripheral blood samples from 183 FtM subjects, 184 MtF subjects, and 394 sex- and ethnically-matched controls.\n\n\nMAIN OUTCOME MEASURES\nGenotype and haplotype analyses of the (TA)n-rs3138774, PvuII-rs2234693, and XbaI-rs9340799 polymorphisms.\n\n\nRESULTS\nAllele and genotype frequencies for the polymorphism XbaI were statistically significant only in FtM vs control XX subjects (P = .021 and P = .020). In XX individuals, the A/G genotype was associated with a low risk of gender dysphoria (odds ratio [OR] = 0.34; 95% CI = 0.16-0.74; P = .011); in XY individuals, the A/A genotype implied a low risk of gender dysphoria (OR = 0.39; 95% CI = 0.17-0.89; P = .008). Binary logistic regression showed partial effects for all three polymorphisms in FtM but not in MtF subjects. The three polymorphisms were in linkage disequilibrium: a small number of TA repeats was linked to the presence of PvuII and XbaI restriction sites (haplotype S-T-A), and a large number of TA repeats was linked to the absence of these restriction sites (haplotype L-C-G). In XX individuals, the presence of haplotype L-C-G carried a low risk of gender dysphoria (OR = 0.66; 95% CI = 0.44-0.99; P = .046), whereas the presence of haplotype L-C-A carried a high susceptibility to gender dysphoria (OR = 3.96; 95% CI = 1.04-15.02; P = .044). Global haplotype was associated with FtM gender dysphoria (P = .017) but not with MtF gender dysphoria.\n\n\nCONCLUSIONS\nXbaI-rs9340799 is involved in FtM gender dysphoria in adults. Our findings suggest different genetic programs for gender dysphoria in men and women. Cortés-Cortés J, Fernández R, Teijeiro N, et al. Genotypes and Haplotypes of the Estrogen Receptor α Gene (ESR1) Are Associated With Female-to-Male Gender Dysphoria. J Sex Med 2017;14:464-472.",
"title": ""
},
{
"docid": "c4d0a1cd8a835dc343b456430791035b",
"text": "Social networks offer an invaluable amount of data from which useful information can be obtained on the major issues in society, among which crime stands out. Research about information extraction of criminal events in Social Networks has been done primarily in English language, while in Spanish, the problem has not been addressed. This paper propose a system for extracting spatio-temporally tagged tweets about crime events in Spanish language. In order to do so, it uses a thesaurus of criminality terms and a NER (named entity recognition) system to process the tweets and extract the relevant information. The NER system is based on the implementation OSU Twitter NLP Tools, which has been enhanced for Spanish language. Our results indicate an improved performance in relation to the most relevant tools such as Standford NER and OSU Twitter NLP Tools, achieving 80.95% precision, 59.65% recall and 68.69% F-measure. The end result shows the crime information broken down by place, date and crime committed through a webservice.",
"title": ""
},
{
"docid": "489015cc236bd20f9b2b40142e4b5859",
"text": "We present an experimental study which demonstrates that model checking techniques can be effective in finding synchronization errors in safety critical software when they are combined with a design for verification approach. We apply the concurrency controller design pattern to the implementation of the synchronization operations in Java programs. This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller can be verified with respect to arbitrary numbers of threads using infinite state model checking techniques, and the threads which use the controller classes can be checked for interface violations using finite state model checking techniques. We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted an experimental study investigating the effectiveness of the presented design for verification approach on safety critical air traffic control software. In this study, we first reengineered the Tactical Separation Assisted Flight Environment (TSAFE) software using the concurrency controller design pattern. Then, using fault seeding, we created 40 faulty versions of TSAFE and used both infinite and finite state verification techniques for finding the seeded faults. The experimental study demonstrated the effectiveness of the presented modular verification approach and resulted in a classification of faults that can be found using the presented approach.",
"title": ""
},
{
"docid": "ae8292c58a58928594d5f3730a6feacf",
"text": "Photoplethysmography (PPG) signals, captured using smart phones are generally noisy in nature. Although they have been successfully used to determine heart rate from frequency domain analysis, further indirect markers like blood pressure (BP) require time domain analysis for which the signal needs to be substantially cleaned. In this paper we propose a methodology to clean such noisy PPG signals. Apart from filtering, the proposed approach reduces the baseline drift of PPG signal to near zero. Furthermore it models each cycle of PPG signal as a sum of 2 Gaussian functions which is a novel contribution of the method. We show that, the noise cleaning effect produces better accuracy and consistency in estimating BP, compared to the state of the art method that uses the 2-element Windkessel model on features derived from raw PPG signal, captured from an Android phone.",
"title": ""
},
{
"docid": "fc2f99fff361e68f154d88da0739bac4",
"text": "Mondor's disease is characterized by thrombophlebitis of the superficial veins of the breast and the chest wall. The list of causes is long. Various types of clothing, mainly tight bras and girdles, have been postulated as causes. We report a case of a 34-year-old woman who referred typical symptoms and signs of Mondor's disease, without other possible risk factors, and showed the cutaneous findings of the tight bra. Therefore, after distinguishing benign causes of Mondor's disease from hidden malignant causes, the clinicians should consider this clinical entity.",
"title": ""
},
{
"docid": "269e2f8bca42d5369f9337aea6191795",
"text": "Today, exposure to new and unfamiliar environments is a necessary part of daily life. Effective communication of location-based information through location-based services has become a key concern for cartographers, geographers, human-computer interaction and professional designers alike. Recently, much attention was directed towards Augmented Reality (AR) interfaces. Current research, however, focuses primarily on computer vision and tracking, or investigates the needs of urban residents, already familiar with their environment. Adopting a user-centred design approach, this paper reports findings from an empirical mobile study investigating how tourists acquire knowledge about an unfamiliar urban environment through AR browsers. Qualitative and quantitative data was used in the development of a framework that shifts the perspective towards a more thorough understanding of the overall design space for such interfaces. The authors analysis provides a frame of reference for the design and evaluation of mobile AR interfaces. The authors demonstrate the application of the framework with respect to optimization of current design of AR.",
"title": ""
},
{
"docid": "95fe3badecc7fa92af6b6aa49b6ff3b2",
"text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "0b6ce2e4f3ef7f747f38068adef3da54",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "488c7437a32daec6fbad12e07bb31f4c",
"text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.",
"title": ""
},
{
"docid": "cd3d9bb066729fc7107c0fef89f664fe",
"text": "The extended contact hypothesis proposes that knowledge that an in-group member has a close relationship with an out-group member can lead to more positive intergroup attitudes. Proposed mechanisms are the in-group or out-group member serving as positive exemplars and the inclusion of the out-group member's group membership in the self. In Studies I and 2, respondents knowing an in-group member with an out-group friend had less negative attitudes toward that out-group, even controlling for disposition.il variables and direct out-group friendships. Study 3, with constructed intergroup-conflict situations (on the robbers cave model). found reduced negative out-group attitudes after participants learned of cross-group friendships. Study 4, a minimal group experiment, showed less negative out-group attitudes for participants observing an apparent in-group-out-group friendship.",
"title": ""
}
] | scidocsrr |
0fd83d74ab36ececf73c967044b74754 | Convolutional Neural Networks for Crop Yield Prediction using Satellite Images | [
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
}
] | [
{
"docid": "efb52e33aee3e3cbf33d04cda77f4d7d",
"text": "With the growing amount of information and availability of opinion-rich resources, it is sometimes difficult for a common man to analyse what others think of. To analyse this information and to see what people in general think or feel of a product or a service is the problem of Sentiment Analysis. Sentiment analysis or Sentiment polarity labelling is an emerging field, so this needs to be accurate. In this paper, we explore various Machine Learning techniques for the classification of Telugu sentences into positive or negative polarities.",
"title": ""
},
{
"docid": "8b5f2d45852cf5c8e1edb6146d37abb7",
"text": "Portable, embedded systems place ever-increasing demands on high-performance, low-power microprocessor design. Dynamic voltage and frequency scaling (DVFS) is a well-known technique to reduce energy in digital systems, but the effectiveness of DVFS is hampered by slow voltage transitions that occur on the order of tens of microseconds. In addition, the recent trend towards chip-multiprocessors (CMP) executing multi-threaded workloads with heterogeneous behavior motivates the need for per-core DVFS control mechanisms. Voltage regulators that are integrated onto the same chip as the microprocessor core provide the benefit of both nanosecond-scale voltage switching and per-core voltage control. We show that these characteristics provide significant energy-saving opportunities compared to traditional off-chip regulators. However, the implementation of on-chip regulators presents many challenges including regulator efficiency and output voltage transient characteristics, which are significantly impacted by the system-level application of the regulator. In this paper, we describe and model these costs, and perform a comprehensive analysis of a CMP system with on-chip integrated regulators. We conclude that on-chip regulators can significantly improve DVFS effectiveness and lead to overall system energy savings in a CMP, but architects must carefully account for overheads and costs when designing next-generation DVFS systems and algorithms.",
"title": ""
},
{
"docid": "ccaa01441d7de9009dea10951a3ea2f3",
"text": "for Natural Language A First Course in Computational Semanti s Volume II Working with Dis ourse Representation Stru tures Patri k Bla kburn & Johan Bos September 3, 1999",
"title": ""
},
{
"docid": "4802e7ed9d911ccbe92b55f04998f3f1",
"text": "Sixteen incidents involving dog bites fitting the description \"severe\" were identified among 5,711 dog bite incidents reported to health departments in five South Carolina counties (population 750,912 in 1980) between July 1, 1979, and June 30, 1982. A \"severe\" attack was defined as one in which the dog \"repeatedly bit or vigorously shook its victim, and the victim or the person intervening had extreme difficulty terminating the attack.\" Information from health department records was clarified by interviews with animal control officers, health and police officials, and persons with firsthand knowledge of the events. Investigation disclosed that the dogs involved in the 16 severe attacks were reproductively intact males. The median age of the dogs was 3 years. A majority of the attacks were by American Staffordshire terriers, St. Bernards, and cocker spaniels. Ten of the dogs had been aggressive toward people or other dogs before the incident that was investigated. Ten of the 16 victims of severe attacks were 10 years of age or younger; the median age of all 16 victims was 8 years. Twelve of the victims either were members of the family that owned the attacking dog or had had contact with the dog before the attack. Eleven of the victims were bitten on the head, neck, or shoulders. In 88 percent of the cases, the attacks took place in the owner's yard or home, or in the adjoining yard. In 10 of the 16 incidents, members of the victims' families witnessed the attacks. The characteristics of these attacks, only one of which proved fatal, were similar in many respects to those that have been reported for other dog bite incidents that resulted in fatalities. On the basis of this study, the author estimates that a risk of 2 fatalities per 1,000 reported dog bites may exist nationwide. Suggestions made for the prevention of severe attacks focus on changing the behavior of both potential canine attackers and potential victims.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "799b39e8c8d8bd86b8eae0d74a8b5ee4",
"text": "The photovoltaic (PV) string under partially shaded conditions exhibits complex output characteristics, i.e., the current–voltage <inline-formula> <tex-math notation=\"LaTeX\">$(I\\mbox{--}V)$</tex-math></inline-formula> curve presents multiple current stairs, whereas the power–voltage <inline-formula> <tex-math notation=\"LaTeX\">$(P\\mbox{--}V)$</tex-math></inline-formula> curve shows multiple power peaks. Thus, the conventional maximum power point tracking (MPPT) method is not acceptable either on tracking accuracy or on tracking speed. In this paper, two global MPPT methods, namely, the search–skip–judge global MPPT (SSJ-GMPPT) and rapid global MPPT (R-GMPPT) methods are proposed in terms of reducing the searching voltage range based on comprehensive study of <inline-formula> <tex-math notation=\"LaTeX\">$I\\mbox{--}V$</tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$P\\mbox{--}V$</tex-math></inline-formula> characteristics of PV string. The SSJ-GMPPT method can track the real maximum power point under any shading conditions and achieve high accuracy and fast tracking speed without additional circuits and sensors. The R-GMPPT method aims to enhance the tracking speed of long string with vast PV modules and reduces more than 90% of the tracking time that is consumed by the conventional global searching method. The improved performance of the two proposed methods has been validated by experimental results on a PV string. The comparison with other methods highlights the two proposed methods more powerful.",
"title": ""
},
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
},
{
"docid": "7699f4fa25a47fca0de320b8bbe6ff00",
"text": "Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, these curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are under represented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and that are consistent with the broad suite of professional needs in this discipline.",
"title": ""
},
{
"docid": "7143493c6a2abe3da9eb4c98da31c620",
"text": "We study probability measures induced by set functions with constraints. Such measures arise in a variety of real-world settings, where prior knowledge, resource limitations, or other pragmatic considerations impose constraints. We consider the task of rapidly sampling from such constrained measures, and develop fast Markov chain samplers for them. Our first main result is for MCMC sampling from Strongly Rayleigh (SR) measures, for which we present sharp polynomial bounds on the mixing time. As a corollary, this result yields a fast mixing sampler for Determinantal Point Processes (DPPs), yielding (to our knowledge) the first provably fast MCMC sampler for DPPs since their inception over four decades ago. Beyond SR measures, we develop MCMC samplers for probabilistic models with hard constraints and identify sufficient conditions under which their chains mix rapidly. We illustrate our claims by empirically verifying the dependence of mixing times on the key factors governing our theoretical bounds.",
"title": ""
},
{
"docid": "676f5528ea9fdc0337dcdac3a6a56383",
"text": "Online Social Networks (OSNs) are becoming a popular method of meeting people and keeping in touch with friends. OSNs resort to trust evaluation models and algorithms to improve service quality and enhance user experiences. Much research has been done to evaluate trust and predict the trustworthiness of a target, usually from the view of a source. Graph-based approaches make up a major portion of the existing works, in which the trust value is calculated through a trusted graph (or trusted network, web of trust, or multiple trust chains). In this article, we focus on graph-based trust evaluation models in OSNs, particularly in the computer science literature. We first summarize the features of OSNs and the properties of trust. Then we comparatively review two categories of graph-simplification-based and graph-analogy-based approaches and discuss their individual problems and challenges. We also analyze the common challenges of all graph-based models. To provide an integrated view of trust evaluation, we conduct a brief review of its pre- and postprocesses (i.e., the preparation and validation of trust models, including information collection, performance evaluation, and related applications). Finally, we identify some open challenges that all trust models are facing.",
"title": ""
},
{
"docid": "c18037d7efce8348f0f06e3f3f83e187",
"text": "Ovotesticular disorder of sex development (OTDSD) is a rare condition and defined as the presence of ovarian and testicular tissue in the same individual. Most of patients with OTDSD have female internal genital organs. In this report, we present a case in which, we demonstrated prostate tissue using endoscopic and radiologic methods in a 46-XX, sex determining region of the Y chromosome negative male phenotypic patient, with no female internal genitalia. Existence of prostate in an XX male without SRY is rarely seen and reveals a complete male phenotype. This finding is critical to figure out what happens in embryonal period.",
"title": ""
},
{
"docid": "a7bc0af9b764021d1f325b1edfbfd700",
"text": "BACKGROUND\nIn the treatment of schizophrenia, changing antipsychotics is common when one treatment is suboptimally effective, but the relative effectiveness of drugs used in this strategy is unknown. This randomized, double-blind study compared olanzapine, quetiapine, risperidone, and ziprasidone in patients who had just discontinued a different atypical antipsychotic.\n\n\nMETHOD\nSubjects with schizophrenia (N=444) who had discontinued the atypical antipsychotic randomly assigned during phase 1 of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) investigation were randomly reassigned to double-blind treatment with a different antipsychotic (olanzapine, 7.5-30 mg/day [N=66]; quetiapine, 200-800 mg/day [N=63]; risperidone, 1.5-6.0 mg/day [N=69]; or ziprasidone, 40-160 mg/day [N=135]). The primary aim was to determine if there were differences between these four treatments in effectiveness measured by time until discontinuation for any reason.\n\n\nRESULTS\nThe time to treatment discontinuation was longer for patients treated with risperidone (median: 7.0 months) and olanzapine (6.3 months) than with quetiapine (4.0 months) and ziprasidone (2.8 months). Among patients who discontinued their previous antipsychotic because of inefficacy (N=184), olanzapine was more effective than quetiapine and ziprasidone, and risperidone was more effective than quetiapine. There were no significant differences between antipsychotics among those who discontinued their previous treatment because of intolerability (N=168).\n\n\nCONCLUSIONS\nAmong this group of patients with chronic schizophrenia who had just discontinued treatment with an atypical antipsychotic, risperidone and olanzapine were more effective than quetiapine and ziprasidone as reflected by longer time until discontinuation for any reason.",
"title": ""
},
{
"docid": "4921d1967a5d05f72a53e5628cac1a8e",
"text": "This paper describes an architecture for controlling non-player characters (NPC) in the First Person Shooter (FPS) game Unreal Tournament 2004. Specifically, the DRE-Bot architecture is made up of three reinforcement learners, Danger, Replenish and Explore, which use the tabular Sarsa(λ) algorithm. This algorithm enables the NPC to learn through trial and error building up experience over time in an approach inspired by human learning. Experimentation is carried to measure the performance of DRE-Bot when competing against fixed strategy bots that ship with the game. The discount parameter, γ, and the trace parameter, λ, are also varied to see if their values have an effect on the performance.",
"title": ""
},
{
"docid": "d98f60a2a0453954543da840076e388a",
"text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.",
"title": ""
},
{
"docid": "339f7a0031680a2d930f143700d66d5e",
"text": "We propose an approach to generate natural language questions from knowledge graphs such as DBpedia and YAGO. We stage this in the setting of a quiz game. Our approach, though, is general enough to be applicable in other settings. Given a topic of interest (e.g., Soccer) and a difficulty (e.g., hard), our approach selects a query answer, generates a SPARQL query having the answer as its sole result, before verbalizing the question.",
"title": ""
},
{
"docid": "1c4165c47ae9870e31a7106f1b82e94d",
"text": "INTRODUCTION\nPrevious studies found that aircraft maintenance workers may be exposed to organophosphates in hydraulic fluid and engine oil. Studies have also illustrated a link between long-term low-level organophosphate pesticide exposure and depression.\n\n\nMETHODS\nA questionnaire containing the Patient Health Questionnaire 8 depression screener was e-mailed to 52,080 aircraft maintenance workers (with N = 4801 complete responses) in a cross-sectional study to determine prevalence and severity of depression and descriptions of their occupational exposures.\n\n\nRESULTS\nThere was no significant difference between reported depression prevalence and severity in similar exposure groups in which aircraft maintenance workers were exposed or may have been exposed to organophosphate esters compared to similar exposure groups in which they were not exposed. However, a dichotomous measure of the prevalence of depression was significantly associated with self-reported exposure levels from low (OR: 1.21) to moderate (OR: 1.68) to high exposure (OR: 2.70) and with each exposure route including contact (OR: 1.68), inhalation (OR: 2.52), and ingestion (OR: 2.55). A self-reported four-level measure of depression severity was also associated with a self-reported four-level measure of exposure.\n\n\nDISCUSSION\nBased on self-reported exposures and outcomes, an association is observed between organophosphate exposure and depression; however, we cannot assume that the associations we observed are causal because some workers may have been more likely to report exposure to organophosphate esters and also more likely to report depression. Future studies should consider using a larger sample size, better methods for characterizing crew chief exposures, and bioassays to measure dose rather than exposure. Hardos JE, Whitehead LW, Han I, Ott DK, Waller DK. Depression prevalence and exposure to organophosphate esters in aircraft maintenance workers. Aerosp Med Hum Perform. 2016; 87(8):712-717.",
"title": ""
},
{
"docid": "db483f6aab0361ce5a3ad1a89508541b",
"text": "In this paper, we describe Swoop, a hypermedia inspired Ontology Browser and Editor based on OWL, the recently standardized Web-oriented ontology language. After discussing the design rationale and architecture of Swoop, we focus mainly on its features, using illustrative examples to highlight its use. We demonstrate that with its web-metaphor, adherence to OWL recommendations and key unique features such as Collaborative Annotation using Annotea, Swoop acts as a useful and efficient web ontology development tool. We conclude with a list of future plans for Swoop, that should further increase its overall appeal and accessibility.",
"title": ""
},
{
"docid": "b418470025d74d745e75225861a1ed7e",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "a52b452f1fb7e1b48a1f3f50ea8a95a7",
"text": "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a “target” domain when the only available training data belongs to a different “source” domain. In this extended abstract we briefly describe a new DA method called Distributional Correspondence Indexing (DCI) for sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. The experiments we have conducted show that DCI obtains better performance than current state-of-theart techniques for cross-lingual and cross-domain sentiment classification.",
"title": ""
},
{
"docid": "af1cc16cae083e8b07e53dc82d5ca68f",
"text": "People often share emotions with others in order to manage their emotional experiences. We investigate how social media properties such as visibility and directedness affect how people share emotions in Facebook, and their satisfaction after doing so. 141 participants rated 1,628 of their own recent status updates, posts they made on others' timelines, and private messages they sent for intensity, valence, personal relevance, and overall satisfaction felt after sharing each message. For network-visible channels-status updates and posts on others' timelines-they also rated their satisfaction with replies they received. People shared differently between channels, with more intense and negative emotions in private messages. People felt more satisfied after sharing more positive emotions in all channels and after sharing more personally relevant emotions in network-visible channels. Finally, people's overall satisfaction after sharing emotions in network-visible channels is strongly tied to their reply satisfaction. Quality of replies, not just quantity, matters, suggesting the need for designs that help people receive valuable responses to their shared emotions.",
"title": ""
}
] | scidocsrr |
8b36ff5c2e3231681101f569f07189d4 | Physical Human Activity Recognition Using Wearable Sensors | [
{
"docid": "e700afa9064ef35f7d7de40779326cb0",
"text": "Human activity recognition is important for many applications. This paper describes a human activity recognition framework based on feature selection techniques. The objective is to identify the most important features to recognize human activities. We first design a set of new features (called physical features) based on the physical parameters of human motion to augment the commonly used statistical features. To systematically analyze the impact of the physical features on the performance of the recognition system, a single-layer feature selection framework is developed. Experimental results indicate that physical features are always among the top features selected by different feature selection methods and the recognition accuracy is generally improved to 90%, or 8% better than when only statistical features are used. Moreover, we show that the performance is further improved by 3.8% by extending the single-layer framework to a multi-layer framework which takes advantage of the inherent structure of human activities and performs feature selection and classification in a hierarchical manner.",
"title": ""
},
{
"docid": "931c75847fdfec787ad6a31a6568d9e3",
"text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.",
"title": ""
}
] | [
{
"docid": "bdffdfe92df254d0b13c1a1c985c0400",
"text": "We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and message introduced by an user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real world open source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting.",
"title": ""
},
{
"docid": "d7d0fa6279b356d37c2f64197b3d721d",
"text": "Estimating the pose of a human in 3D given an image or a video has recently received significant attention from the scientific community. The main reasons for this trend are the ever increasing new range of applications (e.g., humanrobot interaction, gaming, sports performance analysis) which are driven by current technological advances. Although recent approaches have dealt with several challenges and have reported remarkable results, 3D pose estimation remains a largely unsolved problem because real-life applications impose several challenges which are not fully addressed by existing methods. For example, estimating the 3D pose of multiple people in an outdoor environment remains a largely unsolved problem. In this paper, we review the recent advances in 3D human pose estimation from RGB images or image sequences. We propose a taxonomy of the approaches based on the input (e.g., single image or video, monocular or multi-view) and in each case we categorize the methods according to their key characteristics. To provide an overview of the current capabilities, we conducted an extensive experimental evaluation of state-of-the-art approaches in a synthetic dataset created specifically for this task, which along with its ground truth is made publicly available for research purposes. Finally, we provide an in-depth discussion of the insights obtained from reviewing the literature and the results of our experiments. Future directions and challenges are identified.",
"title": ""
},
{
"docid": "24a117cf0e59591514dd8630bcd45065",
"text": "This work presents a coarse-grained distributed genetic algorithm (GA) for RNA secondary structure prediction. This research builds on previous work and contains two new thermodynamic models, INN and INN-HB, which add stacking-energies using base pair adjacencies. Comparison tests were performed against the original serial GA on known structures that are 122, 543, and 784 nucleotides in length on a wide variety of parameter settings. The effects of the new models are investigated, the predicted structures are compared to known structures and the GA is compared against a serial GA with identical models. Both algorithms perform well and are able to predict structures with high accuracy for short sequences.",
"title": ""
},
{
"docid": "bd60ecd918eba443e0772d4edbec6ba4",
"text": "Le ModeÁ le de Culture Fit explique la manieÁ re dont l'environnement socioculturel influence la culture interne au travail et les pratiques de la direction des ressources humaines. Ce modeÁ le a e te teste sur 2003 salarie s d'entreprises prive es dans 10 pays. Les participants ont rempli un questionnaire de 57 items, destine aÁ mesurer les perceptions de la direction sur 4 dimensions socioculturelles, 6 dimensions de culture interne au travail, et les pratiques HRM (Management des Ressources Humaines) dans 3 zones territoiriales. Une analyse ponde re e par re gressions multiples, au niveau individuel, a montre que les directeurs qui caracte risaient leurs environnement socio-culturel de facË on fataliste, supposaient aussi que les employe s n'e taient pas malle ables par nature. Ces directeurs ne pratiquaient pas l'enrichissement des postes et donnaient tout pouvoir au controà le et aÁ la re mune ration en fonction des performances. Les directeurs qui appre ciaient une grande loyaute des APPLIED PSYCHOLOGY: AN INTERNATIONAL REVIEW, 2000, 49 (1), 192±221",
"title": ""
},
{
"docid": "b91833ae4e659fc1a0943eadd5da955d",
"text": "In this paper, we present a factor graph framework to solve both estimation and deterministic optimal control problems, and apply it to an obstacle avoidance task on Unmanned Aerial Vehicles (UAVs). We show that factor graphs allow us to consistently use the same optimization method, system dynamics, uncertainty models and other internal and external parameters, which potentially improves the UAV performance as a whole. To this end, we extended the modeling capabilities of factor graphs to represent nonlinear dynamics using constraint factors. For inference, we reformulate Sequential Quadratic Programming as an optimization algorithm on a factor graph with nonlinear constraints. We demonstrate our framework on a simulated quadrotor in an obstacle avoidance application.",
"title": ""
},
{
"docid": "dbde47a4142bffc2bcbda988781e5229",
"text": "Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"title": ""
},
{
"docid": "525f188960eeb7a66ef9734118609f79",
"text": "Creativity is important for young children learning mathematics. However, much literature has claimed creativity in the learning of mathematics for young children is not adequately supported by teachers in the classroom due to such reasons as teachers’ poor college preparation in mathematics content knowledge, teachers’ negativity toward creative students, teachers’ occupational pressure, and low quality curriculum. The purpose of this grounded theory study was to generate a model that describes and explains how a particular group of early childhood teachers make sense of creativity in the learning of mathematics and how they think they can promote or fail to promote creativity in the classroom. In-depth interviews with 30 Kto Grade-3 teachers, participating in a graduate mathematics specialist certificate program in a medium-sized Midwestern city, were conducted. In contrast to previous findings, these teachers did view mathematics in young children (age 5 to 9) as requiring creativity, in ways that aligned with Sternberg and Lubart’s (1995) investment theory of creativity. Teachers felt they could support creativity in student learning and knew strategies for how to promote creativity in their practices.",
"title": ""
},
{
"docid": "49f0371f84d7874a6ccc6f9dd0779d3b",
"text": "Managing customer satisfaction has become a crucial issue in fast-food industry. This study aims at identifying determinant factor related to customer satisfaction in fast-food restaurant. Customer data are analyzed by using data mining method with two classification techniques such as decision tree and neural network. Classification models are developed using decision tree and neural network to determine underlying attributes of customer satisfaction. Generated rules are beneficial for managerial and practical implementation in fast-food industry. Decision tree and neural network yield more than 80% of predictive accuracy.",
"title": ""
},
{
"docid": "f8f36ef5822446478b154c9d98847070",
"text": "The objective of this research is to improve traffic safety through collecting and distributing up-to-date road surface condition information using mobile phones. Road surface condition information is seen useful for both travellers and for the road network maintenance. The problem we consider is to detect road surface anomalies that, when left unreported, can cause wear of vehicles, lesser driving comfort and vehicle controllability, or an accident. In this work we developed a pattern recognition system for detecting road condition from accelerometer and GPS readings. We present experimental results from real urban driving data that demonstrate the usefulness of the system. Our contributions are: 1) Performing a throughout spectral analysis of tri-axis acceleration signals in order to get reliable road surface anomaly labels. 2) Comprehensive preprocessing of GPS and acceleration signals. 3) Proposing a speed dependence removal approach for feature extraction and demonstrating its positive effect in multiple feature sets for the road surface anomaly detection task. 4) A framework for visually analyzing the classifier predictions over the validation data and labels.",
"title": ""
},
{
"docid": "d9493bec4d01a39ce230b82a98800bb3",
"text": "Biometrics, an integral component of Identity Science, is widely used in several large-scale-county-wide projects to provide a meaningful way of recognizing individuals. Among existing modalities, ocular biometric traits such as iris, periocular, retina, and eye movement have received significant attention in the recent past. Iris recognition is used in Unique Identification Authority of India’s Aadhaar Program and the United Arab Emirate’s border security programs, whereas the periocular recognition is used to augment the performance of face or iris when only ocular region is present in the image. This paper reviews the research progression in these modalities. The paper discusses existing algorithms and the limitations of each of the biometric traits and information fusion approaches which combine ocular modalities with other modalities. We also propose a path forward to advance the research on ocular recognition by (i) improving the sensing technology, (ii) heterogeneous recognition for addressing interoperability, (iii) utilizing advanced machine learning algorithms for better representation and classification, (iv) developing algorithms for ocular recognition at a distance, (v) using multimodal ocular biometrics for recognition, and (vi) encouraging benchmarking standards and open-source software development. ! 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d956c805ee88d1b0ca33ce3f0f838441",
"text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1",
"title": ""
},
{
"docid": "9f74e665d5ca8c84d7b17806163a16ee",
"text": "‘‘This is really still a nightmare — a German nightmare,’’ asserted Mechtilde Maier, Deutsche Telekom’s head of diversity. A multinational company with offices in about 50 countries, Deutsche Telekom is struggling at German headquarters to bring women into its leadership ranks. It is a startling result; at headquarters, one might expect the greatest degree of compliance to commands on high. With only 13% of its leadership positions represented by women, the headquarters is lagging far behind its offices outside Germany, which average 24%. Even progress has been glacial, with an improvement of a mere 0.5% since 2010 versus a 4% increase among its foreign subsidiaries. The phenomenon at Deutsche Telekom reflects a broader pattern, one that manifests in other organizations, in other nations, and in the highest reaches of leadership, including the boardroom. According to the Deloitte Global Centre for Corporate Governance, only about 12% of boardroom seats in the United States are held by women and less than 10% in the United Kingdom (9%), China (8.5%), and India (5%). In stark contrast, these rates are 2—3 times higher in Bulgaria (30%) and Norway (approximately 40%). Organizations are clearly successful in some nations more than others in promoting women to leadership ranks, but why? Instead of a culture’s wealth, values, or practices, our own research concludes that the emergence of women as leaders can be explained in part by a culture’s tightness. Cultural tightness refers to the degree to which a culture has strong norms and low tolerance for deviance. In a tight culture, people might be arrested for spitting, chewing gum, or jaywalking. In loose cultures, although the same behaviors may be met with disapproving glances or fines, they are not sanctioned to the same degree nor are they necessarily seen as taboo. We discovered that women are more likely to emerge as leaders in loose than tight cultures, but with an important exception. Women can emerge as leaders in tight cultures too. Our discoveries highlight that, to promote women to leadership positions, global leaders need to employ strategies that are compatible with the culture’s tightness. Before presenting our findings and their implications, we first discuss the process by which leaders tend to emerge.",
"title": ""
},
{
"docid": "9983792c37341cca7666e2f0d7b42d2b",
"text": "Domain modeling is an important step in the transition from natural-language requirements to precise specifications. For large systems, building a domain model manually is a laborious task. Several approaches exist to assist engineers with this task, whereby candidate domain model elements are automatically extracted using Natural Language Processing (NLP). Despite the existing work on domain model extraction, important facets remain under-explored: (1) there is limited empirical evidence about the usefulness of existing extraction rules (heuristics) when applied in industrial settings; (2) existing extraction rules do not adequately exploit the natural-language dependencies detected by modern NLP technologies; and (3) an important class of rules developed by the information retrieval community for information extraction remains unutilized for building domain models.\n Motivated by addressing the above limitations, we develop a domain model extractor by bringing together existing extraction rules in the software engineering literature, extending these rules with complementary rules from the information retrieval literature, and proposing new rules to better exploit results obtained from modern NLP dependency parsers. We apply our model extractor to four industrial requirements documents, reporting on the frequency of different extraction rules being applied. We conduct an expert study over one of these documents, investigating the accuracy and overall effectiveness of our domain model extractor.",
"title": ""
},
{
"docid": "8fa721c98dac13157bcc891c06561ec7",
"text": "Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total child-care is not yet being promoted, there are indications that it is „on the cards‟. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.",
"title": ""
},
{
"docid": "74beaea9eccab976dc1ee7b2ddf3e4ca",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "36fbc5f485d44fd7c8726ac0df5648c0",
"text": "We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting : Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment as long the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants. To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.",
"title": ""
},
{
"docid": "d5eb643385b573706c48cbb2cb3262df",
"text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.",
"title": ""
},
{
"docid": "ec9f793761ebd5199c6a2cc8c8215ac4",
"text": "A dual-frequency compact printed antenna for Wi-Fi (IEEE 802.11x at 2.45 and 5.5 GHz) applications is presented. The design is successfully optimized using a finite-difference time-domain (FDTD)-algorithm-based procedure. Some prototypes have been fabricated and measured, displaying a very good performance.",
"title": ""
},
{
"docid": "b2aad34d91b5c38f794fc2577593798c",
"text": "We present a model for pricing and hedging derivative securities and option portfolios in an environment where the volatility is not known precisely but is assumed instead to lie between two extreme values min and max These bounds could be inferred from extreme values of the implied volatilities of liquid options or from high low peaks in historical stock or option implied volatilities They can be viewed as de ning a con dence interval for future volatility values We show that the extremal non arbitrageable prices for the derivative asset which arise as the volatility paths vary in such a band can be described by a non linear PDE which we call the Black Scholes Barenblatt equation In this equation the pricing volatility is selected dynamically from the two extreme values min max according to the convexity of the value function A simple algorithm for solving the equation by nite di erencing or a trinomial tree is presented We show that this model captures the importance of diversi cation in managing derivatives positions It can be used systematically to construct e cient hedges using other derivatives in conjunction with the underlying asset y Courant Institute of Mathematical Sciences Mercer st New York NY Institute for Advanced Study Princeton NJ J P Morgan Securities New York NY The uncertain volatility model According to Arbitrage Pricing Theory if the market presents no arbitrage opportunities there exists a probability measure on future scenarios such that the price of any security is the expectation of its discounted cash ows Du e Such a probability is known as a mar tingale measure Harrison and Kreps or a pricing measure Determining the appropriate martingale measure associated with a sector of the security space e g the stock of a company and a riskless short term bond permits the valuation of any contingent claim based on these securities However pricing measures are often di cult to calculate precisely and there may exist more than one measure consistent with a given market It is useful to view the non uniqueness of pricing measures as re ecting the many choices for derivative asset prices that can exist in an uncertain economy For example option prices re ect the market s expectation about the future value of the underlying asset as well as its projection of future volatility Since this projection changes as the market reacts to new information implied volatility uctuates unpredictably In these circumstances fair option values and perfectly replicating hedges cannot be determined with certainty The existence of so called volatility risk in option trading is a concrete manifestation of market incompleteness This paper addresses the issue of derivative asset pricing and hedging in an uncertain future volatility environment For this purpose instead of choosing a pricing model that incorporates a complete view of the forward volatility as a single number or a predetermined function of time and price term structure of volatilities or even a stochastic process with given statistics we propose to operate under the less stringent assumption that that the volatility of future prices is restricted to lie in a bounded set but is otherwise undetermined For simplicity we restrict our discussion to derivative securities based on a single liquidly traded stock which pays no dividends over the contract s lifetime and assume a constant interest rate The basic assumption then reduces to postulating that under all admissible pricing mea sures future volatility paths will be restricted to 
lie within a band Accordingly we assume that the paths followed by future stock prices are It o processes viz dSt St t dZt t dt where t and t are non anticipative functions such that",
"title": ""
},
{
"docid": "9aefccc6fc6f628d374c1ffccfcc656a",
"text": "Keeping up with rapidly growing research fields, especially when there are multiple interdisciplinary sources, requires substantial effort for researchers, program managers, or venture capital investors. Current theories and tools are directed at finding a paper or website, not gaining an understanding of the key papers, authors, controversies, and hypotheses. This report presents an effort to integrate statistics, text analytics, and visualization in a multiple coordinated window environment that supports exploration. Our prototype system, Action Science Explorer (ASE), provides an environment for demonstrating principles of coordination and conducting iterative usability tests of them with interested and knowledgeable users. We developed an understanding of the value of reference management, statistics, citation context extraction, natural language summarization for single and multiple documents, filters to interactively select key papers, and network visualization to see citation patterns and identify clusters. The three-phase usability study guided our revisions to ASE and led us to improve the testing methods.",
"title": ""
}
] | scidocsrr |
68a78d56c63b1ba917d18b94fa7cee6c | A novel wavelet-SVM short-time passenger flow prediction in Beijing subway system | [
{
"docid": "4e29bdddbdeb5382347a3915dc7048de",
"text": "Accuracy and robustness with respect to missing or corrupt input data are two key characteristics for any travel time prediction model that is to be applied in a real-time environment (e.g. for display on variable message signs on freeways). This article proposes a freeway travel time prediction framework that exhibits both qualities. The framework exploits a recurrent neural network topology, the so-called statespace neural network (SSNN), with preprocessing strategies based on imputation. Although the SSNN model is a neural network, its design (in terms of inputand model selection) is not ‘‘black box’’ nor location-specific. Instead, it is based on the lay-out of the freeway stretch of interest. In this sense, the SSNN model combines the generality of neural network approaches, with traffic related (‘‘white-box’’) design. Robustness to missing data is tackled by means of simple imputation (data replacement) schemes, such as exponential forecasts and spatial interpolation. Although there are clear theoretical shortcomings to ‘‘simple’’ imputation schemes to remedy input failure, our results indicate that their use is justified in this particular application. The SSNN model appears to be robust to the ‘‘damage’’ done by these imputation schemes. This is true for both incidental (random) and structural input failure. We demonstrate that the SSNN travel time prediction framework yields good accurate and robust travel time predictions on both synthetic and real data. 2005 Elsevier Ltd. All rights reserved. 0968-090X/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.trc.2005.03.001 * Corresponding author. E-mail address: h.vanlint@citg.tudelft.nl (J.W.C. van Lint). 348 J.W.C. van Lint et al. / Transportation Research Part C 13 (2005) 347–369",
"title": ""
}
] | [
{
"docid": "8cd99d9b59e6f1b631767b57fb506619",
"text": "We describe origami programming methodology based on constraint functional logic programming. The basic operations of origami are reduced to solving systems of equations which describe the geometric properties of paper folds. We developed two software components: one that provides primitives to construct, manipulate and visualize paper folds and the other that solves the systems of equations. Using these components, we illustrate computer-supported origami construction and show the significance of the constraint functional logic programming paradigm in the program development.",
"title": ""
},
{
"docid": "2f7a0ab1c7a3ae17ef27d2aa639c39b4",
"text": "Evolutionary algorithms are commonly used to create high-performing strategies or agents for computer games. In this paper, we instead choose to evolve the racing tracks in a car racing game. An evolvable track representation is devised, and a multiobjective evolutionary algorithm maximises the entertainment value of the track relative to a particular human player. This requires a way to create accurate models of players' driving styles, as well as a tentative definition of when a racing track is fun, both of which are provided. We believe this approach opens up interesting new research questions and is potentially applicable to commercial racing games.",
"title": ""
},
{
"docid": "01f423d3fae351fa6c39821d0ec895e6",
"text": "Skeptics believe the Web is too unstructured for Web mining to succeed. Indeed, data mining has been applied traditionally to databases, yet much of the information on the Web lies buried in documents designed for human consumption such as home pages or product catalogs. Furthermore, much of the information on the Web is presented in natural-language text with no machine-readable semantics; HTML annotations structure the display of Web pages, but provide little insight into their content. Some have advocated transforming the Web into a massive layered database to facilitate data mining [12], but the Web is too dynamic and chaotic to be tamed in this manner. Others have attempted to hand code site-specific “wrappers” that facilitate the extraction of information from individual Web resources (e.g., [8]). Hand coding is convenient but cannot keep up with the explosive growth of the Web. As an alternative, this article argues for the structured Web hypothesis: Information on the Web is sufficiently structured to facilitate effective Web mining. Examples of Web structure include linguistic and typographic conventions, HTML annotations (e.g., <title>), classes of semi-structured documents (e.g., product catalogs), Web indices and directories, and much more. To support the structured Web hypothesis, this article will survey preliminary Web mining successes and suggest directions for future work. Web mining may be organized into the following subtasks:",
"title": ""
},
{
"docid": "cb1c65cb1e7959e52f3091da6103ff3a",
"text": "The Internet of Things paradigm originates from the proliferation of intelligent devices that can sense, compute and communicate data streams in a ubiquitous information and communication network. The great amounts of data coming from these devices introduce some challenges related to the storage and processing capabilities of the information. This strengthens the novel paradigm known as Big Data. In such a complex scenario, the Cloud computing is an efficient solution for the managing of sensor data. This paper presents Polluino, a system for monitoring the air pollution via Arduino. Moreover, a Cloud-based platform that manages data coming from air quality sensors is developed.",
"title": ""
},
{
"docid": "e244cedaac9812461142859fc87f3e52",
"text": "Krill herd (KH) has been proven to be an efficient algorithm for function optimization. For some complex functions, this algorithmmay have problems with convergence or being trapped in local minima. To cope with these issues, this paper presents an improved KH-based algorithm, called Opposition Krill Herd (OKH). The proposed approach utilizes opposition-based learning (OBL), position clamping (PC) and method while both PC and heavy-tailed CM help KH escape from local optima. Simulations are implemented on an array of benchmark functions and two engineering optimization problems. The results show that OKH has a good performance on majority of the considered functions and two engineering cases. The influence of each individual strategy (OBL, CM and PC) on KH is verified through 25 benchmarks. The results show that the KH with OBL, CM and PC operators, has the best performance among different variants of OKH. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6655b03c0fcc83a71a3119d7e526eedc",
"text": "Dynamic magnetic resonance imaging (MRI) scans can be accelerated by utilizing compressed sensing (CS) reconstruction methods that allow for diagnostic quality images to be generated from undersampled data. Unfortunately, CS reconstruction is time-consuming, requiring hours between a dynamic MRI scan and image availability for diagnosis. In this work, we train a convolutional neural network (CNN) to perform fast reconstruction of severely undersampled dynamic cardiac MRI data, and we explore the utility of CNNs for further accelerating dynamic MRI scan times. Compared to state-of-the-art CS reconstruction techniques, our CNN achieves reconstruction speeds that are 150x faster without significant loss of image quality. Additionally, preliminary results suggest that CNNs may allow scan times that are 2x faster than those allowed by CS.",
"title": ""
},
{
"docid": "8b675cc47b825268837a7a2b5a298dc9",
"text": "Artificial Intelligence chatbot is a technology that makes interaction between man and machine possible by using natural language. In this paper, we proposed an architectural design of a chatbot that will function as virtual diabetes physician/doctor. This chatbot will allow diabetic patients to have a diabetes control/management advice without the need to go to the hospital. A general history of a chatbot, a brief description of each chatbots is discussed. We proposed the design of a new technique that will be implemented in this chatbot as the key component to function as diabetes physician. Using this design, chatbot will remember the conversation path through parameter called Vpath. Vpath will allow chatbot to gives a response that is mostly suitable for the whole conversation as it specifically designed to be a virtual diabetes physician.",
"title": ""
},
{
"docid": "3c30209d29779153b4cb33d13d101cf8",
"text": "Acceptance-based interventions such as mindfulness-based stress reduction program and acceptance and commitment therapy are alternative therapies for cognitive behavioral therapy for treating chronic pain patients. To assess the effects of acceptance-based interventions on patients with chronic pain, we conducted a systematic review and meta-analysis of controlled and noncontrolled studies reporting effects on mental and physical health of pain patients. All studies were rated for quality. Primary outcome measures were pain intensity and depression. Secondary outcomes were anxiety, physical wellbeing, and quality of life. Twenty-two studies (9 randomized controlled studies, 5 clinical controlled studies [without randomization] and 8 noncontrolled studies) were included, totaling 1235 patients with chronic pain. An effect size on pain of 0.37 was found for the controlled studies. The effect on depression was 0.32. The quality of the studies was not found to moderate the effects of acceptance-based interventions. The results suggest that at present mindfulness-based stress reduction program and acceptance and commitment therapy are not superior to cognitive behavioral therapy but can be good alternatives. More high-quality studies are needed. It is recommended to focus on therapies that integrate mindfulness and behavioral therapy. Acceptance-based therapies have small to medium effects on physical and mental health in chronic pain patients. These effects are comparable to those of cognitive behavioral therapy.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
},
{
"docid": "9a1505d126d1120ffa8d9670c71cb076",
"text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.",
"title": ""
},
{
"docid": "bc6a6cf11881326360387cbed997dcf1",
"text": "The explanation of heterogeneous multivariate time series data is a central problem in many applications. The problem requires two major data mining challenges to be addressed simultaneously: Learning models that are humaninterpretable and mining of heterogeneous multivariate time series data. The intersection of these two areas is not adequately explored in the existing literature. To address this gap, we propose grammar-based decision trees and an algorithm for learning them. Grammar-based decision tree extends decision trees with a grammar framework. Logical expressions, derived from context-free grammar, are used for branching in place of simple thresholds on attributes. The added expressivity enables support for a wide range of data types while retaining the interpretability of decision trees. By choosing a grammar based on temporal logic, we show that grammar-based decision trees can be used for the interpretable classification of high-dimensional and heterogeneous time series data. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to analyze the classic Australian Sign Language dataset as well as categorize and explain near midair collisions to support the development of a prototype aircraft collision avoidance system.",
"title": ""
},
{
"docid": "88130a65e625f85e527d63a0d2a446d4",
"text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "4a8e78ff046070b14a53f6cd0737dd32",
"text": "This study aims to gain insights into emerging research fields in the area of marketing and tourism. It provides support for the use of quantitative techniques to facilitate content analysis. The authors present a longitudinal latent semantic analysis of keywords. The proposed method is illustrated by two different examples: a scholarly journal (International Marketing Review) and conference proceedings (ENTER eTourism Conference). The methodology reveals an understanding of the current state of the art of marketing research and e-tourism by identifying neglected, popular or upcoming thematic research foci. The outcomes are compared with former results generated by traditional content analysis techniques. Findings confirm that the proposed methodology has the potential to complement qualitative content analysis, as the semantic analysis produces similar outcomes to qualitative content analysis to some extent. This paper reviews a journal’s content over a period of nearly three decades. The authors argue that the suggested methodology facilitates the analysis dramatically and can thus be simply applied on a regular basis in order to monitor topic development within a specific research domain.",
"title": ""
},
{
"docid": "204ecea0d8b6c572cd1a5d20b5e267a9",
"text": "Nowadays it is very common for people to write online reviews of products they have purchased. These reviews are a very important source of information for the potential customers before deciding to purchase a product. Consequently, websites containing customer reviews are becoming targets of opinion spam. -- undeserving positive or negative reviews; reviews that reviewers never use the product, but is written with an agenda in mind. This paper aims to detect spam reviews by users. Characteristics of the review will be identified based on previous research, plus a new feature -- rating consistency check. The goal is to devise a tool to evaluate the product reviews and detect product review spams. The approach is based on multiple criteria: checking unusual review vs. rating patterns, links or advertisements, detecting questions and comparative reviews. We tested our system on a couple of sets of data and find that we are able to detect these factors effectively.",
"title": ""
},
{
"docid": "b1167c4321d3235974bc6171d6c062bb",
"text": "Thousands of malicious applications targeting mobile devices, including the popular Android platform, are created every day. A large number of those applications are created by a small number of professional underground actors, however previous studies overlooked such information as a feature in detecting and classifying malware, and in attributing malware to creators. Guided by this insight, we propose a method to improve on the performance of Android malware detection by incorporating the creator’s information as a feature and classify malicious applications into similar groups. We developed a system called AndroTracker that implements this method in practice. AndroTracker enables fast detection of malware by using creator information such as serial number of certificate. Additionally, it analyzes malicious behaviors and permissions to increase detection accuracy. AndroTracker also can classify malware based on similarity scoring. Finally, AndroTracker shows detection and classification performance with 99% and 90% accuracy respectively.",
"title": ""
},
{
"docid": "dd9d776dbc470945154d460921005204",
"text": "The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants. In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs). To the best of our knowledge, this is the first such work on the ACS which shares many key elements of the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task. The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory. The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory. Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU. The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.",
"title": ""
},
{
"docid": "493eb0d5e4f9db288de9abd7ab172a2d",
"text": "To reveal and leverage the correlated and complemental information between different views, a great amount of multi-view learning algorithms have been proposed in recent years. However, unsupervised feature selection in multiview learning is still a challenge due to lack of data labels that could be utilized to select the discriminative features. Moreover, most of the traditional feature selection methods are developed for the single-view data, and are not directly applicable to the multi-view data. Therefore, we propose an unsupervised learning method called Adaptive Unsupervised Multi-view Feature Selection (AUMFS) in this paper. AUMFS attempts to jointly utilize three kinds of vital information, i.e., data cluster structure, data similarity and the correlations between different views, contained in the original data together for feature selection. To achieve this goal, a robust sparse regression model with the l2,1-norm penalty is introduced to predict data cluster labels, and at the same time, multiple view-dependent visual similar graphs are constructed to flexibly model the visual similarity in each view. Then, AUMFS integrates data cluster labels prediction and adaptive multi-view visual similar graph learning into a unified framework. To solve the objective function of AUMFS, a simple yet efficient iterative method is proposed. We apply AUMFS to three visual concept recognition applications (i.e., social image concept recognition, object recognition and video-based human action recognition) on four benchmark datasets. Experimental results show the proposed method significantly outperforms several state-of-the-art feature selection methods. More importantly, our method is not very sensitive to the parameters and the optimization method converges very fast.",
"title": ""
},
{
"docid": "fcf894fdaec96bd826ec3c5eb31be707",
"text": "In future defence scenarios directed energy weapons are of increasing interest. Therefore national and international R&D programs are increasing their activities on laser and high power microwave technologies in the defence and anti terror areas. The paper gives an overview of the German R&D programmes on directed energy weapons. A solid state medium energy weapon laser (MEL) is investigated at Rheinmetall for i.e. anti air defence applications up to distances of about 7 km. Due to the small volume these Lasers can be integrated as a secondary weapon system into mobile platforms such as AECVs. The beam power of a MEL is between 1 kW and 100 kW. The electric energy per pulse is in the kJ range. A burst of only a few pulses is needed to destroy optronics of targets in a distance up to 7 km. The electric energy requirements of a MEL system are low. High energy density pulsed power technologies are already available for the integration into a medium sized vehicle. The paper gives an overview on the MEL technologies which are under investigation in order to introduce a technology demonstrator at the end of 2005. The electric requirements at the interface to the power bus of a vehicle are presented. Finally an integration concept as a secondary weapon in a medium sized vehicle is given and discussed. In close cooperation with Diehl Munitionssysteme high power microwave technologies are investigated. Different kinds of HPM Sources are under development for defence and anti terror applications. It is the goal to introduce first prototype systems within a short time frame. The paper gives a brief overview on the different source technologies currently under investigation. The joint program concentrates on ultra wide band and damped sinus HPM waveforms in single shot and repetitive operation. Radiation powers up to the Gigawatt range are realized up to now. By presenting some characteristic scenarios for those HPM systems the wide range of applications is proven in the paper.",
"title": ""
},
{
"docid": "92b20ec581fc5609da2908f9f0f74a33",
"text": "We address the problem of using external rotation information with uncalibrated video sequences. The main problem addressed is, what is the benefit of the orientation information for camera calibration? It is shown that in case of a rotating camera the camera calibration problem is linear even in the case that all intrinsic parameters vary. For arbitrarily moving cameras the calibration problem is also linear but underdetermined for the general case of varying all intrinsic parameters. However, if certain constraints are applied to the intrinsic parameters the camera calibration can be computed linearily. It is analyzed which constraints are needed for camera calibration of freely moving cameras. Furthermore we address the problem of aligning the camera data with the rotation sensor data in time. We give an approach to align these data in case of a rotating camera.",
"title": ""
}
] | scidocsrr |
3cf3840371b5e9515a49b1c4f17bd44e | ICT Governance: A Reference Framework | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "33a9c1b32f211ea13a70b1ce577b71dc",
"text": "In this work, we propose a face recognition library, with the objective of lowering the implementation complexity of face recognition features on applications in general. The library is based on Convolutional Neural Networks; a special kind of Neural Network specialized for image data. We present the main motivations for the use of face recognition, as well as the main interface for using the library features. We describe the overall architecture structure of the library and evaluated it on a large scale scenario. The proposed library achieved an accuracy of 98.14% when using a required confidence of 90%, and an accuracy of 99.86% otherwise. Keywords—Artificial Intelligence, CNNs, Face Recognition, Image Recognition, Machine Learning, Neural Networks.",
"title": ""
},
{
"docid": "1876319faa49a402ded2af46a9fcd966",
"text": "One, and two, and three police persons spring out of the shadows Down the corner comes one more And we scream into that city night: \" three plus one makes four! \" Well, they seem to think we're disturbing the peace But we won't let them make us sad 'Cause kids like you and me baby, we were born to add Born To Add, Sesame Street (sung to the tune of Bruce Springsteen's Born to Run) to Ursula Preface In October 1996, I got a position as a research assistant working on the Twenty-One project. The project aimed at providing a software architecture that supports a multilingual community of people working on local Agenda 21 initiatives in exchanging ideas and publishing their work. Local Agenda 21 initiatives are projects of local governments, aiming at sustainable processes in environmental , human, and economic terms. The projects cover themes like combating poverty, protecting the atmosphere, human health, freshwater resources, waste management, education, etc. Documentation on local Agenda 21 initiatives are usually written in the language of the local government, very much unlike documentation on research in e.g. information retrieval for which English is the language of international communication. Automatic cross-language retrieval systems are therefore a helpful tool in the international cooperation between local governments. Looking back, I regret not being more involved in the non-technical aspects of the Twenty-One project. To make up for this loss, many of the examples in this thesis are taken from the project's domain. Working on the Twenty-One project convinced me that solutions to cross-language information retrieval should explicitly combine translation models and retrieval models into one unifying framework. Working in a language technology group, the use of language models seemed a natural choice. A choice that simplifies things considerably for that matter. The use of language models for information retrieval practically reduces ranking to simply adding the occurrences of terms: complex weighting algorithms are no longer needed. \" Born to add \" is therefore the motto of this thesis. By adding out loud, it hopefully annoys-no offence, and with all due respect-some of the well-established information retrieval approaches, like Bruce Stringbean and The Sesame Street Band annoys the Sesame Street police. Acknowledgements The research presented in this thesis is funded in part by the European Union projects Twenty-One, Pop-Eye and Olive, and the Telematics Institute project Druid. I am most grateful to Wessel Kraaij of TNO-TPD …",
"title": ""
},
{
"docid": "8e6efa696b960cf08cf1616efc123cbd",
"text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.",
"title": ""
},
{
"docid": "e6d4d23df1e6d21bd988ca462526fe15",
"text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.",
"title": ""
},
{
"docid": "d58425a613f9daea2677d37d007f640e",
"text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.",
"title": ""
},
{
"docid": "ab2c0a23ed71295ee4aa51baf9209639",
"text": "An expert system to diagnose the main childhood diseases among the tweens is proposed. The diagnosis is made taking into account the symptoms that can be seen or felt. The childhood diseases have many common symptoms and some of them are very much alike. This creates many difficulties for the doctor to reach at a right decision or diagnosis. The proposed system can remove these difficulties and it is having knowledge of many childhood diseases. The proposed expert system is implemented using SWI-Prolog.",
"title": ""
},
{
"docid": "263ac34590609435b2a104a385f296ca",
"text": "Efficient computation of curvature-based energies is important for practical implementations of geometric modeling and physical simulation applications. Building on a simple geometric observation, we provide a version of a curvature-based energy expressed in terms of the Laplace operator acting on the embedding of the surface. The corresponding energy--being quadratic in positions--gives rise to a constant Hessian in the context of isometric deformations. The resulting isometric bending model is shown to significantly speed up common cloth solvers, and when applied to geometric modeling situations built onWillmore flow to provide runtimes which are close to interactive rates.",
"title": ""
},
{
"docid": "d82c11c5a6981f1d3496e0838519704d",
"text": "This paper presents a detailed study of the nonuniform bipolar conduction phenomenon under electrostatic discharge (ESD) events in single-finger NMOS transistors and analyzes its implications for the design of ESD protection for deep-submicron CMOS technologies. It is shown that the uniformity of the bipolar current distribution under ESD conditions is severely degraded depending on device finger width ( ) and significantly influenced by the substrate and gate-bias conditions as well. This nonuniform current distribution is identified as a root cause of the severe reduction in ESD failure threshold current for the devices with advanced silicided processes. Additionally, the concept of an intrinsic second breakdown triggering current ( 2 ) is introduced, which is substrate-bias independent and represents the maximum achievable ESD failure strength for a given technology. With this improved understanding of ESD behavior involved in advanced devices, an efficient design window can be constructed for robust deep submicron ESD protection.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "26abfdd9af796a2903b0f7cef235b3b4",
"text": "Argumentation mining is an advanced form of human language understanding by the machine. This is a challenging task for a machine. When sufficient explicit discourse markers are present in the language utterances, the argumentation can be interpreted by the machine with an acceptable degree of accuracy. However, in many real settings, the mining task is difficult due to the lack or ambiguity of the discourse markers, and the fact that a substantial amount of knowledge needed for the correct recognition of the argumentation, its composing elements and their relationships is not explicitly present in the text, but makes up the background knowledge that humans possess when interpreting language. In this article1 we focus on how the machine can automatically acquire the needed common sense and world knowledge. As very few research has been done in this respect, many of the ideas proposed in this article are tentative, but start being researched. We give an overview of the latest methods for human language understanding that map language to a formal knowledge representation that facilitates other tasks (for instance, a representation that is used to visualize the argumentation or that is easily shared in a decision or argumentation support system). Most current systems are trained on texts that are manually annotated. Then we go deeper into the new field of representation learning that nowadays is very much studied in computational linguistics. This field investigates methods for representing language as statistical concepts or as vectors, allowing straightforward methods of compositionality. The methods often use deep learning and its underlying neural network technologies to learn concepts from large text collections in an unsupervised way (i.e., without the need for manual annotations). We show how these methods can help the argumentation mining process, but also demonstrate that these methods need further research to automatically acquire the necessary background knowledge and more specifically common sense and world knowledge. We propose a number of ways to improve the learning of common sense and world knowledge by exploiting textual and visual data, and touch upon how we can integrate the learned knowledge in the argumentation mining process.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "2bf2e36bbbbdd9e091395636fcc2a729",
"text": "An open-source framework for real-time structured light is presented. It is called “SLStudio”, and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and aquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentyfull, however due to the computational constraints, all public implementations so far are limited to offline processing. With “SLStudio”, we are making a platform available which enables researchers from many different fields to build application specific real time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.",
"title": ""
},
{
"docid": "6830ca98632f86ef2a0cb4c19183d9b4",
"text": "In success or failure of any firm/industry or organization employees plays the most vital and important role. Airline industry is one of service industry the job of which is to sell seats to their travelers/costumers and passengers; hence employees inspiration towards their work plays a vital part in serving client’s requirements. This research focused on the influence of employee’s enthusiasm and its apparatuses e.g. pay and benefits, working atmosphere, vision of organization towards customer satisfaction and management systems in Pakistani airline industry. For analysis correlation and regression methods were used. Results of the research highlighted that workers motivation and its four major components e.g. pay and benefits, working atmosphere, vision of organization and management systems have a significant positive impact on customer’s gratification. Those employees of the industry who directly interact with client highly impact the client satisfaction level. It is obvious from results of this research that pay and benefits performs a key role in employee’s motivation towards achieving their organizational objectives of greater customer satisfaction.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "6daa1bc00a4701a2782c1d5f82c518e2",
"text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "14a90781132fa3932d41b21b382ba362",
"text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.",
"title": ""
},
{
"docid": "67fb91119ba2464e883616ffd324f864",
"text": "Significant improvements in automobile suspension performance are achieved by active systems. However, current active suspension systems are too expensive and complex. Developments occurring in power electronics, permanent magnet materials, and microelectronic systems justifies analysis of the possibility of implementing electromagnetic actuators in order to improve the performance of automobile suspension systems without excessively increasing complexity and cost. In this paper, the layouts of hydraulic and electromagnetic active suspensions are compared. The actuator requirements are calculated, and some experimental results proving that electromagnetic suspension could become a reality in the future are shown.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
}
] | scidocsrr |
15f0cd4c5f3d7a9b4c9c3073d1530b75 | Mental health morbidity among people subject to immigration detention in the UK: a feasibility study. | [
{
"docid": "2c87c9977991239e475e33151117a9df",
"text": "BACKGROUND\nThe number of asylum seekers, refugees and internally displaced people worldwide is rising. Western countries are using increasingly restrictive policies, including the detention of asylum seekers, and there is concern that this is harmful.\n\n\nAIMS\nTo investigate mental health outcomes among adult, child and adolescent immigration detainees.\n\n\nMETHOD\nA systematic review was conducted of studies investigating the impact of immigration detention on the mental health of children, adolescents and adults, identified by a systematic search of databases and a supplementary manual search of references.\n\n\nRESULTS\nTen studies were identified. All reported high levels of mental health problems in detainees. Anxiety, depression and post-traumatic stress disorder were commonly reported, as were self-harm and suicidal ideation. Time in detention was positively associated with severity of distress. There is evidence for an initial improvement in mental health occurring subsequent to release, although longitudinal results have shown that the negative impact of detention persists.\n\n\nCONCLUSIONS\nThis area of research is in its infancy and studies are limited by methodological constraints. Findings consistently report high levels of mental health problems among detainees. There is some evidence to suggest an independent adverse effect of detention on mental health.",
"title": ""
}
] | [
{
"docid": "ceaa09643c64ef16218fcd7a91f66edc",
"text": "Opinion spam, intentionally written by spammers who do not have actual experience with services or products, has recently become a factor that undermines the credibility of information online. In recent years, studies have attempted to detect opinion spam using machine learning algorithms. However, limitations of goldstandard spam datasets still prove to be a major obstacle in opinion spam research. In this paper, we introduce a novel dataset called Paraphrased OPinion Spam (POPS), which contains a new type of review spam that imitates real human opinions using crowdsourcing. To create such a seemingly truthful review spam dataset, we asked task participants to paraphrase truthful reviews, and include factual information and domain knowledge in their reviews. The classification experiments and semantic analysis results show that our POPS dataset most linguistically and semantically resembles truthful reviews. We believe that our new deceptive opinion spam dataset will help advance opinion spam research.",
"title": ""
},
{
"docid": "9f3388eb88e230a9283feb83e4c623e1",
"text": "Entity Linking (EL) is an essential task for semantic text understanding and information extraction. Popular methods separately address the Mention Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging their mutual dependency. We here propose the first neural end-to-end EL system that jointly discovers and links entities in a text document. The main idea is to consider all possible spans as potential mentions and learn contextual similarity scores over their entity candidates that are useful for both MD and ED decisions. Key components are context-aware mention embeddings, entity embeddings and a probabilistic mention entity map, without demanding other engineered features. Empirically, we show that our end-to-end method significantly outperforms popular systems on the Gerbil platform when enough training data is available. Conversely, if testing datasets follow different annotation conventions compared to the training set (e.g. queries/ tweets vs news documents), our ED model coupled with a traditional NER system offers the best or second best EL accuracy.",
"title": ""
},
{
"docid": "a9ad415524996446ea1204ad5ff11d89",
"text": "Crime against women is increasing at an alarming rate in almost all parts of India. Women in the Indian society have been victims of humiliation, torture and exploitation. It has even existed in the past but only in the recent years the issues have been brought to the open for concern. According to the latest data released by the National Crime Records Bureau (NCRB), crime against women have increased more than doubled over the past ten years. While a number of analyses have been done in the field of crime pattern detection, none have done an extensive study on the crime against women in India. The present paper describes a behavioural analysis of crime against women in India from the year 2001 to 2014. The study evaluates the efficacy of Infomap clustering algorithm for detecting communities of states and union territories in India based on crimes. As it is a graph based clustering approach, all the states of India along with the union territories have been considered as nodes of the graph and similarity among the nodes have been measured based on different types of crimes. Each community is a group of states and / or union territories which are similar based on crime trends. Initially, the method finds the communities based on current year crime data, subsequently at the end of a year when new crime data for the next year is available, the graph is modified and new communities are formed. The process is repeated year wise that helps to predict how crime against women has significantly increased in various states of India over the past years. It also helps in rapid visualisation and identification of states which are densely affected with crimes. This approach proves to be quite effective and can also be used for analysing the global crime scenario.",
"title": ""
},
{
"docid": "66f46290a9194d4e982b8d1b59a73090",
"text": "Sensor to body calibration is a key requirement for capturing accurate body movements in applications based on wearable systems. In this paper, we consider the specific problem of estimating the positions of multiple inertial measurement units (IMUs) relative to the adjacent body joints. To derive an efficient, robust and precise method based on a practical procedure is a crucial as well as challenging task when developing a wearable system with multiple embedded IMUs. In this work, first, we perform a theoretical analysis of an existing position calibration method, showing its limited applicability for the hip and knee joint. Based on this, we propose a method for simultaneously estimating the positions of three IMUs (mounted on pelvis, upper leg, lower leg) relative to these joints. The latter are here considered as an ensemble. Finally, we perform an experimental evaluation based on simulated and real data, showing the improvements of our calibration method as well as lines of future work.",
"title": ""
},
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "b95190b1139935bdc40634fe0650a51c",
"text": "Much of recent research has been devoted to video prediction and generation, yet most of the previous works have demonstrated only limited success in generating videos on short-term horizons. The hierarchical video prediction method by Villegas et al. (2017b) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017b), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder together without highlevel supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor. Our method can predict about 20 seconds into the future and provides better results compared to Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "c3fda89c22e17144b3046bb4639d6d7a",
"text": "Since 1990s Honeybee Robotics has been developing and testing surface coring drills for future planetary missions. Recently, we focused on developing a rotary-percussive core drill for the 2018 Mars Sample Return mission and in particular for the Mars Astrobiology Explorer-Cacher, MAX-C mission. The goal of the 2018 MAX-C mission is to acquire approximately 20 cores from various rocks and outcrops on the surface of Mars. The acquired cores, 1 cm diameter and 5 cm long, would be cached for return back to Earth either in 2022 or 2024, depending which of the MSR architectures is selected. We built a testbed coring drill that was used to acquire drilling data, such as power, rate of penetration, and Weight on Bit, in various rock formations. Based on these drilling data we designed a prototype Mars Sample Return coring drill. The proposed MSR drill is an arm-mounted, standalone device, requiring no additional arm actuation once positioned and preloaded. A low mass, compact transmission internal to the housing provides all of the actuation of the tool mechanisms. The drill uses a rotary-percussive drilling approach and can acquire a 1 cm diameter and 5 cm long core in Saddleback basalt in less than 30 minutes with only ∼20 N Weight on Bit and less than 100 Watt of power. The prototype MSR drill weighs approximately 5 kg1,2.",
"title": ""
},
{
"docid": "c18aad29529e40220bc519472be10988",
"text": "Informative and discriminative feature descriptors play a fundamental role in deformable shape analysis. For example, they have been successfully employed in correspondence, registration, and retrieval tasks. In recent years, significant attention has been devoted to descriptors obtained from the spectral decomposition of the Laplace-Beltrami operator associated with the shape. Notable examples in this family are the heat kernel signature (HKS) and the recently introduced wave kernel signature (WKS). The Laplacian-based descriptors achieve state-of-the-art performance in numerous shape analysis tasks; they are computationally efficient, isometry-invariant by construction, and can gracefully cope with a variety of transformations. In this paper, we formulate a generic family of parametric spectral descriptors. We argue that to be optimized for a specific task, the descriptor should take into account the statistics of the corpus of shapes to which it is applied (the \"signal\") and those of the class of transformations to which it is made insensitive (the \"noise\"). While such statistics are hard to model axiomatically, they can be learned from examples. Following the spirit of the Wiener filter in signal processing, we show a learning scheme for the construction of optimized spectral descriptors and relate it to Mahalanobis metric learning. The superiority of the proposed approach in generating correspondences is demonstrated on synthetic and scanned human figures. We also show that the learned descriptors are robust enough to be learned on synthetic data and transferred successfully to scanned shapes.",
"title": ""
},
{
"docid": "ef0c5454b9b7854866712e897c29a198",
"text": "This paper presents a new online clustering algorithm called SAFN which is used to learn continuously evolving clusters from non-stationary data. The SAFN uses a fast adaptive learning procedure to take into account variations over time. In non-stationary and multi-class environment, the SAFN learning procedure consists of five main stages: creation, adaptation, mergence, split and elimination. Experiments are carried out in three kinds of datasets to illustrate the performance of the SAFN algorithm for online clustering. Compared with SAKM algorithm, SAFN algorithm shows better performance in accuracy of clustering and multi-class high-dimension data.",
"title": ""
},
{
"docid": "32a3ed78cd8abe70977ef28bede467fd",
"text": "Plagiarism in the sense of “theft of intellectual property” has been around for as long as humans have produced work of art and research. However, easy access to the Web, large databases, and telecommunication in general, has turned plagiarism into a serious problem for publishers, researchers and educational institutions. In this paper, we concentrate on textual plagiarism (as opposed to plagiarism in music, paintings, pictures, maps, technical drawings, etc.). We first discuss the complex general setting, then report on some results of plagiarism detection software and finally draw attention to the fact that any serious investigation in plagiarism turns up rather unexpected side-effects. We believe that this paper is of value to all researchers, educators and students and should be considered as seminal work that hopefully will encourage many still deeper investigations.",
"title": ""
},
{
"docid": "6c30f1a32c2e422ca2e2b416ac96632d",
"text": "The present paper has a profound literature review of the relation between cyber security, aviation and the vulnerabilities prone by the increasing use of information systems in aviation realm. Civil aviation is in the process of evolution of the air traffic management system through the introduction of new technologies. Therefore, the modernization of aeronautical communications are creating network security issues in aviation that have not been mitigated yet. The purpose of this thesis is to make a systematic qualitative analysis of the cyber-attacks against Automatic Dependent Surveillance Broadcast. With this analysis, the paper combines the knowledge of two fields which are meant to deal together with the security issues in aviation. The thesis focuses on the exploitation of the vulnerabilities of ADS-B and presents an analysis taking into account the perspective of cyber security and aviation experts. The threats to ADS-B are depicted, classified and evaluated by aviation experts, making use of interviews in order to determine the possible impact, and the actions that would follow in case a cyber-attack occurs. The results of the interviews show that some attacks do not really represent a real problem for the operators of the system and that other attacks may create enough confusion due to their complexity. The experience is a determinant factor for the operators of ADS-B, because based on that a set of mitigations was proposed by aviation experts that can help to cope in a cyberattack situation. This analysis can be used as a reference guide to understand the impact of cyber security threats in aviation and the need of the research and aviation communities to broaden the knowledge and to increase the level of expertise in order to face the challenges posed by network security issues. The thesis is in English and contains 58 pages of text, 5 chapters, 17 figures, 15 tables.",
"title": ""
},
{
"docid": "81a45cb4ca02c38839a81ad567eb1491",
"text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.",
"title": ""
},
{
"docid": "ebb6f9ab7918edc2b0746ee8ee244f4a",
"text": "P u b l i s h e d b y t h e I E E E C o m p u t e r S o c i e t y Pervasive Computing: A Paradigm for the 21st Century In 1991, Mark Weiser, then chief technology officer for Xerox’s Palo Alto Research Center, described a vision for 21st century computing that countered the ubiquity of personal computers. “The most profound technologies are those that disappear,” he wrote. “They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computing has since mobilized itself beyond the desktop PC. Significant hardware developments—as well as advances in location sensors, wireless communications, and global networking—have advanced Weiser’s vision toward technical and economic viability. Moreover, the Web has diffused some of the psychological barriers that he also thought would have to disappear. However, the integration of information technology into our lives still falls short of Weiser’s concluding vision:",
"title": ""
},
{
"docid": "e2f878f2ecc62bdbaa5e578f8a2b6be5",
"text": "A standard technique from the hashing literature is to use two hash functions h1(x) and h2(x) to simulate additional hash functions of the form gi(x) = h1(x) + ih2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for randomness in practice.",
"title": ""
},
{
"docid": "16ff4e6bef26c6c64e204373c657aa26",
"text": "We present the Mim-Solution's approach to the RecSys Challenge 2016, which ranked 2nd. The goal of the competition was to prepare job recommendations for the users of the website Xing.com.\n Our two phase algorithm consists of candidate selection followed by the candidate ranking. We ranked the candidates by the predicted probability that the user will positively interact with the job offer. We have used Gradient Boosting Decision Trees as the regression tool.",
"title": ""
},
{
"docid": "a478b6f7accfb227e6ee5a6b35cd7fa1",
"text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness",
"title": ""
},
{
"docid": "b5c64ddf3be731a281072a21700a85ee",
"text": "This paper addresses the problem of joint detection and recounting of abnormal events in videos. Recounting of abnormal events, i.e., explaining why they are judged to be abnormal, is an unexplored but critical task in video surveillance, because it helps human observers quickly judge if they are false alarms or not. To describe the events in the human-understandable form for event recounting, learning generic knowledge about visual concepts (e.g., object and action) is crucial. Although convolutional neural networks (CNNs) have achieved promising results in learning such concepts, it remains an open question as to how to effectively use CNNs for abnormal event detection, mainly due to the environment-dependent nature of the anomaly detection. In this paper, we tackle this problem by integrating a generic CNN model and environment-dependent anomaly detectors. Our approach first learns CNN with multiple visual tasks to exploit semantic information that is useful for detecting and recounting abnormal events. By appropriately plugging the model into anomaly detectors, we can detect and recount abnormal events while taking advantage of the discriminative power of CNNs. Our approach outperforms the state-of-the-art on Avenue and UCSD Ped2 benchmarks for abnormal event detection and also produces promising results of abnormal event recounting.",
"title": ""
},
{
"docid": "084b2787a6b79de789334c4dc8c14702",
"text": "Renewable energy is a key technology in reducing global carbon dioxide emissions. Currently, penetration of intermittent renewable energies in most power grids is low, such that the impact of renewable energy's intermittency on grid stability is controllable. Utility scale energy storage systems can enhance stability of power grids with increasing share of intermittent renewable energies. With the grid communication network in smart grids, mobile battery systems in battery electric vehicles and plug-in hybrid electric vehicles can also be used for energy storage and ancillary services in smart grids. This paper will review the stationary and mobile battery systems for grid voltage and frequency stability control in smart grids with increasing shares of intermittent renewable energies. An optimization algorithm on vehicle-to-grid operation will also be presented.",
"title": ""
}
] | scidocsrr |
a435b13af5148d705f23d65a7b07d8a0 | Continuous Hyper-parameter Learning for Support Vector Machines | [
{
"docid": "f1a162f64838817d78e97a3c3087fae4",
"text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.",
"title": ""
},
{
"docid": "dce51c1fed063c9d9776fce998209d25",
"text": "While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lankriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We show that it can be rewritten as a semi-infinite linear program that can be efficiently solved by recycling the standard SVM implementations. Moreover, we generalize the formulation and our method to a larger class of problems, including regression and one-class classification. Experimental results show that the proposed algorithm works for hundred thousands of examples or hundreds of kernels to be combined, and helps for automatic model selection, improving the interpretability of the learning result. In a second part we discuss general speed up mechanism for SVMs, especially when used with sparse feature maps as appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice dataset from computational biology. We integrated Multiple Kernel Learning in our Machine Learning toolbox SHOGUN for which the source code is publicly available at http://www.fml.tuebingen.mpg.de/raetsch/projects/shogun.",
"title": ""
}
] | [
{
"docid": "6a8a849bc8272a7b73259e732e3be81b",
"text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.",
"title": ""
},
{
"docid": "31432fe0f313b5ffd929be5f37b2c029",
"text": "We review the field of femtosecond pulse shaping, in which Fourier synthesis methods are used to generate nearly arbitrarily shaped ultrafast optical wave forms according to user specification. An emphasis is placed on programmable pulse shaping methods based on the use of spatial light modulators. After outlining the fundamental principles of pulse shaping, we then present a detailed discussion of pulse shaping using several different types of spatial light modulators. Finally, new research directions in pulse shaping, and applications of pulse shaping to optical communications, biomedical optical imaging, high power laser amplifiers, quantum control, and laser-electron beam interactions are reviewed. ©2000 American Institute of Physics. @S0034-6748 ~00!02005-0#",
"title": ""
},
{
"docid": "378f881bb955777e69b5aeff090c53fe",
"text": "Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and variational level set. The proposed method consists of five steps as follows: first, we extract a mask in a CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based the above mentioned procedures an initial mask for each tooth is obtained. Finally, we utilize the initial mask of teeth and apply a Variational level set to refine initial teeth boundaries to final contours. The proposed algorithm was evaluated in the presence of 30 multi-slice CT datasets including 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. In view of the fact that, this technique is based on the characteristic of the overall region of the teeth image, it is possible to extract a very smooth and accurate tooth contour using this technique. In the presence of the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques.",
"title": ""
},
{
"docid": "4411ff57ab4fbfdff76501fe2e3f6f4a",
"text": "Incorporating wireless transceivers with numerous antennas (such as Massive-MIMO) is a prospective way to increase the link capacity or enhance the energy efficiency of future communication systems. However, the benefits of such approach can be realized only when proper channel information is available at the transmitter. Since the amount of the channel information required by the transmitter is large with so many antennas, the feedback is arduous in practice, especially for frequency division duplexing (FDD) systems. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing, which permits the transmitter to obtain channel information with acceptable accuracy under substantially reduced feedback load. Furthermore, by leveraging properties of compressive sensing, we present two adaptive feedback protocols, in which the feedback content can be dynamically configured based on channel conditions to improve the efficiency.",
"title": ""
},
{
"docid": "d75f9c632d197040c7f6d2939b19c215",
"text": "OBJECTIVE\nTo understand belief in a specific scientific claim by studying the pattern of citations among papers stating it.\n\n\nDESIGN\nA complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that beta amyloid, a protein accumulated in the brain in Alzheimer's disease, is produced by and injures skeletal muscle of patients with inclusion body myositis. Social network theory and graph theory were used to analyse this network.\n\n\nMAIN OUTCOME MEASURES\nCitation bias, amplification, and invention, and their effects on determining authority.\n\n\nRESULTS\nThe network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.\n\n\nCONCLUSION\nCitation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.",
"title": ""
},
{
"docid": "4fc2c8064cc85781d567ea95e66083ed",
"text": "Visualization is a powerful tool for analysing data and presenting results in science, engineering and medicine. This paper reviews ways in which it can be used in distributed and/or collaborative enviroments. Distributed visualization addresses a number of resource allocation problems, including the location of processing close to data for the minimization of data traffic. The advent of the Grid Computing paradigm and the link to Web Services provides fresh challenges and opportunities for distributed visualization—including the close coupling of simulations and visualizations in a steering environment. Recent developments in collaboration have seen the growth of specialized facilities (such as Access Grid) which have supplemented traditional desktop video conferencing using the Internet and multicast communications. Collaboration allows multiple users—possibly at remote sites—to take part in the visualization process at levels which range from the viewing of images to the shared control of the visualization methods. In this review, we present a model framework for distributed and collaborative visualization and assess a selection of visualization systems and frameworks for their use in a distributed or collaborative environment. We also discuss some examples of enabling technology and review recent work from research projects in this field.",
"title": ""
},
{
"docid": "567f27921ee05e125806db1d75460e77",
"text": "Face caricatures are widely used in political cartoons and generating caricatures from images has become a popular research topic recently. The main challenge lies in achieving nice artistic effect and capturing face characteristics by exaggerating the most featured parts while keeping the resemblance to the original image. In this paper, a sketch-based face caricature synthesis framework is proposed to generate and exaggerate the face caricature from a single near-frontal picture. We first present an effective and robust face component rendering method using Adaptive Thresholding to eliminate the influence of illumination by separating face components into layers. Then, we propose an automatic exaggeration method, in which face component features are trained using Support Vector Machine (SVM) and then amplified using image processing techniques to make the caricature more hilarious and thus more impressive. After that, a hair rendering method is presented, which synthesizes hair in the same caricature style using edge-detection techniques. Practical results show that the synthesized face caricatures are of great artistic effect and well characterized, and our method is robust and efficient even under unfavorable lighting conditions.",
"title": ""
},
{
"docid": "56b95a744d7bcb89462db4abbf33852a",
"text": "Recent studies have revealed that emerging modern machine learning techniques are advantageous to statistical models for text classification, such as SVM. In this study, we discuss the applications of the support vector machine with mixture of kernel (SVM-MK) to design a text classification system. Differing from the standard SVM, the SVM-MK uses the 1-norm based object function and adopts the convex combinations of single feature basic kernels. Only a linear programming problem needs to be resolved and it greatly reduces the computational costs. More important, it is a transparent model and the optimal feature subset can be obtained automatically. A real Chinese corpus from Fudan University is used to demonstrate the good performance of the SVMMK.",
"title": ""
},
{
"docid": "3b9e33ca0f2e479c58e3290f5c3ee2d5",
"text": "BACKGROUND\nCardiac complications due to iron overload are the most common cause of death in patients with thalassemia major. The aim of this study was to compare iron chelation effects of deferoxamine, deferasirox, and combination of deferoxamine and deferiprone on cardiac and liver iron load measured by T2* MRI.\n\n\nMETHODS\nIn this study, 108 patients with thalassemia major aged over 10 years who had iron overload in cardiac T2* MRI were studied in terms of iron chelators efficacy on the reduction of myocardial siderosis. The first group received deferoxamine, the second group only deferasirox, and the third group, a combination of deferoxamine and deferiprone. Myocardial iron was measured at baseline and 12 months later through T2* MRI technique.\n\n\nRESULTS\nThe three groups were similar in terms of age, gender, ferritin level, and mean myocardial T2* at baseline. In the deferoxamine group, myocardial T2* was increased from 12.0±4.1 ms at baseline to 13.5±8.4 ms at 12 months (p=0.10). Significant improvement was observed in myocardial T2* of the deferasirox group (p<0.001). In the combined treatment group, myocardial T2* was significantly increased (p<0.001). These differences among the three groups were not significant at the 12 months. A significant improvement was observed in liver T2* at 12 months compared to baseline in the deferasirox and the combination group.\n\n\nCONCLUSION\nIn comparison to deferoxamine monotherapy, combination therapy and deferasirox monotherapy have a significant impact on reducing iron overload and improvement of myocardial and liver T2* MRI.",
"title": ""
},
{
"docid": "0e56318633147375a1058a6e6803e768",
"text": "150/150). Large-scale distributed analyses of over 30,000 MRI scans recently detected common genetic variants associated with the volumes of subcortical brain structures. Scaling up these efforts, still greater computational challenges arise in screening the genome for statistical associations at each voxel in the brain, localizing effects using “image-wide genome-wide” testing (voxelwise GWAS, vGWAS). Here we benefit from distributed computations at multiple sites to meta-analyze genome-wide image-wide data, allowing private genomic data to stay at the site where it was collected. Site-specific tensorbased morphometry (TBM) is performed with a custom template for each site, using a multi channel registration. A single vGWAS testing 10 variants against 2 million voxels can yield hundreds of TB of summary statistics, which would need to be transferred and pooled for meta-analysis. We propose a 2-step method, which reduces data transfer for each site to a subset of SNPs and voxels guaranteed to contain all significant hits.",
"title": ""
},
{
"docid": "fd0c32b1b4e52f397d0adee5de7e381c",
"text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.",
"title": ""
},
{
"docid": "bfdf6e8e98793388dcf8f13b7147faf0",
"text": "Recently, Long Term Evolution (LTE) has developed a femtocell for indoor coverage extension. However, interference problem between the femtocell and the macrocell should be solved in advance. In this paper, we propose an interference management scheme in the LTE femtocell systems using Fractional Frequency Reuse (FFR). Under the macrocell allocating frequency band by the FFR, the femtocell chooses sub-bands which are not used in the macrocell sub-area to avoid interference. Simulation results show that proposed scheme enhances total/edge throughputs and reduces the outage probability in overall network, especially for the cell edge users.",
"title": ""
},
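A toy sketch of the interference-avoidance rule just described: under FFR, the femtocell picks only sub-bands that the macrocell does not use in its own sub-area. The number of sub-bands and the FFR partition are illustrative assumptions, not the paper's parameters.

```python
ALL_SUBBANDS = set(range(12))

def femtocell_subbands(macro_subbands_in_area, n_needed=3):
    """Choose up to n_needed sub-bands not used by the macrocell in this sub-area."""
    free = sorted(ALL_SUBBANDS - set(macro_subbands_in_area))
    return free[:n_needed]

# e.g. the macrocell serves this sub-area on sub-bands 0-3 under its FFR plan
print(femtocell_subbands(macro_subbands_in_area=[0, 1, 2, 3]))  # -> [4, 5, 6]
```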
{
"docid": "b5df59d926ca4778c306b255d60870a1",
"text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.",
"title": ""
},
{
"docid": "0f7c98d1071d95ef537d5534f994f435",
"text": "Zhaohui Xue 1,*, Peijun Du 2,3,4, Hongjun Su 1 and Shaoguang Zhou 1 1 School of Earth Sciences and Engineering, Hohai University, Nanjing 211100, China; hjsu1@163.com (H.S.); zhousg1966@126.com (S.Z.) 2 Key Laboratory for Satellite Mapping Technology and Applications of National Administration of Surveying, Mapping and Geoinformation of China, Nanjing University, Nanjing 210023, China; dupjrs@gmail.com 3 Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210023, China 4 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing University, Nanjing 210023, China * Correspondence: zhaohui.xue@hhu.edu.cn",
"title": ""
},
{
"docid": "c45bec7edcd1e8337926db90d3663797",
"text": "The dramatically growing demand of Cyber Physical and Social Computing (CPSC) has enabled a variety of novel channels to reach services in the financial industry. Combining cloud systems with multimedia big data is a novel approach for Financial Service Institutions (FSIs) to diversify service offerings in an efficient manner. However, the security issue is still a great issue in which the service availability often conflicts with the security constraints when the service media channels are varied. This paper focuses on this problem and proposes a novel approach using the Semantic-Based Access Control (SBAC) techniques for acquiring secure financial services on multimedia big data in cloud computing. The proposed approach is entitled IntercroSsed Secure Big Multimedia Model (2SBM), which is designed to secure accesses between various media through the multiple cloud platforms. The main algorithms supporting the proposed model include the Ontology-Based Access Recognition (OBAR) Algorithm and the Semantic Information Matching (SIM) Algorithm. We implement an experimental evaluation to prove the correctness and adoptability of our proposed scheme.",
"title": ""
},
{
"docid": "e6cbd8d32233e7e683b63a5a1a0e91f8",
"text": "Background:Quality of life is an important end point in clinical trials, yet there are few quality of life questionnaires for neuroendocrine tumours.Methods:This international multicentre validation study assesses the QLQ-GINET21 Quality of Life Questionnaire in 253 patients with gastrointestinal neuroendocrine tumours. All patients were requested to complete two quality of life questionnaires – the EORTC Core Quality of Life questionnaire (QLQ-C30) and the QLQ-GINET21 – at baseline, and at 3 and 6 months post-baseline; the psychometric properties of the questionnaire were then analysed.Results:Analysis of QLQ-GINET21 scales confirmed appropriate aggregation of the items, except for treatment-related symptoms, where weight gain showed low correlation with other questions in the scale; weight gain was therefore analysed as a single item. Internal consistency of scales using Cronbach’s α coefficient was >0.7 for all parts of the QLQ-GINET21 at 6 months. Intraclass correlation was >0.85 for all scales. Discriminant validity was confirmed, with values <0.70 for all scales compared with each other.Scores changed in accordance with alterations in performance status and in response to expected clinical changes after therapies. Mean scores were similar for pancreatic and other tumours.Conclusion:The QLQ-GINET21 is a valid and responsive tool for assessing quality of life in the gut, pancreas and liver neuroendocrine tumours.",
"title": ""
},
{
"docid": "6440be547f86da7e08b79eac6b4311fe",
"text": "OBJECTIVE\nTo assess the bioequivalence of an ezetimibe/simvastatin (EZE/SIMVA) combination tablet compared to the coadministration of ezetimibe and simvastatin as separate tablets (EZE + SIMVA).\n\n\nMETHODS\nIn this open-label, randomized, 2-part, 2-period crossover study, 96 healthy subjects were randomly assigned to participate in each part of the study (Part I or II), with each part consisting of 2 single-dose treatment periods separated by a 14-day washout. Part I consisted of Treatments A (EZE 10 mg + SIMVA 10 mg) and B (EZE/SIMVA 10/10 mg/mg) and Part II consisted of Treatments C (EZE 10 mg + SIMVA 80 mg) and D (EZE/SIMVA 10/80 mg/mg). Blood samples were collected up to 96 hours post-dose for determination of ezetimibe, total ezetimibe (ezetimibe + ezetimibe glucuronide), simvastatin and simvastatin acid (the most prevalent active metabolite of simvastatin) concentrations. Ezetimibe and simvastatin acid AUC(0-last) were predefined as primary endpoints and ezetimibe and simvastatin acid Cmax were secondary endpoints. Bioequivalence was achieved if 90% confidence intervals (CI) for the geometric mean ratios (GMR) (single tablet/coadministration) of AUC(0-last) and Cmax fell within prespecified bounds of (0.80, 1.25).\n\n\nRESULTS\nThe GMRs of the AUC(0-last) and Cmax for ezetimibe and simvastatin acid fell within the bioequivalence limits (0.80, 1.25). EZE/ SIMVA and EZE + SIMVA were generally well tolerated.\n\n\nCONCLUSIONS\nThe lowest and highest dosage strengths of EZE/SIMVA tablet were bioequivalent to the individual drug components administered together. Given the exact weight multiples of the EZE/SIMVA tablet and linear pharmacokinetics of simvastatin across the marketed dose range, bioequivalence of the intermediate tablet strengths (EZE/SIMVA 10/20 mg/mg and EZE/SIMVA 10/40 mg/mg) was inferred, although these dosages were not tested directly. These results indicate that the safety and efficacy profile of EZE + SIMVA coadministration therapy can be applied to treatment with the EZE/SIMVA tablet across the clinical dose range.",
"title": ""
},
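The bioequivalence criterion described above rests on geometric mean ratios and 90% confidence intervals falling within (0.80, 1.25). The sketch below illustrates that calculation with a simplified paired analysis on made-up AUC values; the actual study would use a crossover ANOVA model, so this is only a worked illustration of the acceptance rule.

```python
import numpy as np
from scipy import stats

auc_tablet = np.array([105.0, 98.0, 112.0, 95.0])      # hypothetical AUC(0-last), single tablet
auc_coadmin = np.array([100.0, 101.0, 108.0, 97.0])    # hypothetical AUC(0-last), coadministration

diff = np.log(auc_tablet) - np.log(auc_coadmin)         # within-subject log differences
gmr = np.exp(diff.mean())                                # geometric mean ratio (tablet / coadmin)
half_width = stats.t.ppf(0.95, len(diff) - 1) * diff.std(ddof=1) / np.sqrt(len(diff))
ci_low, ci_high = np.exp(diff.mean() - half_width), np.exp(diff.mean() + half_width)
bioequivalent = (ci_low >= 0.80) and (ci_high <= 1.25)   # prespecified bounds
print(gmr, (ci_low, ci_high), bioequivalent)
```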
{
"docid": "3e075d0914eb43b94f86ede42f079544",
"text": "We present an algorithm for curve skeleton extraction via Laplacian-based contraction. Our algorithm can be applied to surfaces with boundaries, polygon soups, and point clouds. We develop a contraction operation that is designed to work on generalized discrete geometry data, particularly point clouds, via local Delaunay triangulation and topological thinning. Our approach is robust to noise and can handle moderate amounts of missing data, allowing skeleton-based manipulation of point clouds without explicit surface reconstruction. By avoiding explicit reconstruction, we are able to perform skeleton-driven topology repair of acquired point clouds in the presence of large amounts of missing data. In such cases, automatic surface reconstruction schemes tend to produce incorrect surface topology. We show that the curve skeletons we extract provide an intuitive and easy-to-manipulate structure for effective topology modification, leading to more faithful surface reconstruction.",
"title": ""
},
{
"docid": "1328ced6939005175d3fbe2ef95fd067",
"text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.",
"title": ""
},
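A rough sketch of the "chip" idea from the abstract above: crop fixed-size context regions around ground-truth boxes instead of processing the whole image pyramid. Scale validity ranges, the proposal-driven background chips, and the training loop are omitted; the chip size and box coordinates are assumptions used only to illustrate the cropping step.

```python
import numpy as np

def chips_around_boxes(image, boxes, chip=512):
    """Return chip-sized crops centered on each (x1, y1, x2, y2) ground-truth box."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        x0 = int(np.clip(cx - chip // 2, 0, max(w - chip, 0)))   # keep the chip inside the image
        y0 = int(np.clip(cy - chip // 2, 0, max(h - chip, 0)))
        crops.append(image[y0:y0 + chip, x0:x0 + chip])
    return crops

img = np.zeros((800, 1333, 3), dtype=np.uint8)                   # placeholder image at one pyramid scale
chips = chips_around_boxes(img, [(100, 120, 200, 260), (900, 400, 1100, 640)])
```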
{
"docid": "0f24b6c36586505c1f4cc001e3ddff13",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] | scidocsrr |
dd2aa4d810c644b63fb3357ff15f2a83 | Personality and life satisfaction: a facet-level analysis. | [
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
},
{
"docid": "93b3abcc741223c9793acce1a7c7647b",
"text": "The authors examined the interplay of personality and cultural factors in the prediction of the affective (hedonic balance) and the cognitive (life satisfaction) components of subjective well-being (SWB). They predicted that the influence of personality on life satisfaction is mediated by hedonic balance and that the relation between hedonic balance and life satisfaction is moderated by culture. As a consequence, they predicted that the influence of personality on life satisfaction is also moderated by culture. Participants from 2 individualistic cultures (United States, Germany) and 3 collectivistic cultures (Japan, Mexico, Ghana) completed measures of Extraversion, Neuroticism, hedonic balance, and life satisfaction. As predicted, Extraversion and Neuroticism influenced hedonic balance to the same degree in all cultures, and hedonic balance was a stronger predictor of life satisfaction in individualistic than in collectivistic cultures. The influence of Extraversion and Neuroticism on life satisfaction was largely mediated by hedonic balance. The results suggest that the influence of personality on the emotional component of SWB is pancultural, whereas the influence of personality on the cognitive component of SWB is moderated by culture.",
"title": ""
}
] | [
{
"docid": "61980865ef90d0236af464caf2005024",
"text": "Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, maybe more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, obeying driving training in a virtual environment under the instruction of the operator. Four types of enthrones (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. Electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. SVM classification algorithm using radial basis function as kernel function obtained better results. A combined entropy-based method demonstrates good classification performance for studying driver fatigue detection.",
"title": ""
},
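A minimal sketch of one step of the pipeline above: computing a spectral-entropy feature per EEG channel epoch and classifying with an RBF-kernel SVM. The other three entropies, electrode selection, and the real data are omitted; the array shapes, sampling rate, and labels are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def spectral_entropy(epoch, fs=256):
    """Shannon entropy of the normalized power spectral density of one epoch."""
    _, psd = welch(epoch, fs=fs)
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

epochs = np.random.randn(40, 4, 256 * 2)      # 40 epochs, 4 channels, 2 s at 256 Hz (placeholder data)
labels = np.random.randint(0, 2, 40)          # 0 = alert, 1 = fatigued (placeholder labels)
features = np.array([[spectral_entropy(ch) for ch in ep] for ep in epochs])
clf = SVC(kernel="rbf").fit(features, labels)
```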
{
"docid": "c97eb53dcf3c1a1ecf6455f6489fa93e",
"text": "Emotions form a very important and basic aspect of our lives. Whatever we do, whatever we say, somehow does reflect some of our emotions, though may not be directly. To understand the very fundamental behavior of a human, we need to analyze these emotions through some emotional data, also called, the affect data. This data can be text, voice, facial expressions etc. Using this emotional data for analyzing the emotions also forms an interdisciplinary field, called Affective Computing. Computation of emotions is a very challenging task, much work has been done but many more increments are also possible. With the advent of social networking sites, many people tend to get attracted towards analyzing this very text available on these various sites. Analyzing this data over the Internet means we are spanning across the whole continent, going through all the cultures and communities across. This paper summarizes the previous works done in the field of textual emotion analysis based on various emotional models and computational approaches used.",
"title": ""
},
{
"docid": "da03427eb4874bd90903674b6ffe9897",
"text": "The network provides a method of communication to distribute information to the masses. With the growth of data communication over computer network, the security of information has become a major issue. Steganography and cryptography are two different data hiding techniques. Steganography hides messages inside some other digital media. Cryptography, on the other hand obscures the content of the message. We propose a high capacity data embedding approach by the combination of Steganography and cryptography. In the process a message is first encrypted using transposition cipher method and then the encrypted message is embedded inside an image using LSB insertion method. The combination of these two methods will enhance the security of the data embedded. This combinational methodology will satisfy the requirements such as capacity, security and robustness for secure data transmission over an open channel. A comparative analysis is made to demonstrate the effectiveness of the proposed method by computing Mean square error (MSE) and Peak Signal to Noise Ratio (PSNR). We analyzed the data hiding technique using the image performance parameters like Entropy, Mean and Standard Deviation. The stego images are tested by transmitting them and the embedded data are successfully extracted by the receiver. The main objective in this paper is to provide resistance against visual and statistical attacks as well as high capacity.",
"title": ""
},
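A toy sketch of the two stages described above: a columnar transposition cipher followed by LSB embedding of the ciphertext bits into an image array. Key handling, message-length framing, and the extraction side are left out; the cover image and key are placeholders.

```python
import numpy as np

def transpose_encrypt(msg, key=4):
    """Write the message row-wise into `key` columns, then read it column-wise."""
    msg = msg.ljust(-(-len(msg) // key) * key)                   # pad to a full grid
    rows = [msg[i:i + key] for i in range(0, len(msg), key)]
    return "".join("".join(r[c] for r in rows) for c in range(key))

def embed_lsb(image, text):
    """Replace the least significant bit of each pixel with one message bit."""
    bits = np.array([int(b) for ch in text for b in format(ord(ch), "08b")], dtype=np.uint8)
    flat = image.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits          # clear LSB, then set message bit
    return flat.reshape(image.shape)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)      # placeholder cover image
stego = embed_lsb(cover, transpose_encrypt("meet at noon"))
```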
{
"docid": "4d7e876d61060061ba6419869d00675e",
"text": "Context-aware recommender systems (CARS) take context into consideration when modeling user preferences. There are two general ways to integrate context with recommendation: contextual filtering and contextual modeling. Currently, the most effective context-aware recommendation algorithms are based on a contextual modeling approach that estimate deviations in ratings across different contexts. In this paper, we propose context similarity as an alternative contextual modeling approach and examine different ways to represent context similarity and incorporate it into recommendation. More specifically, we show how context similarity can be integrated into the sparse linear method and matrix factorization algorithms. Our experimental results demonstrate that learning context similarity is a more effective approach to contextaware recommendation than modeling contextual rating deviations.",
"title": ""
},
{
"docid": "61c6d49c3cdafe4366d231ebad676077",
"text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.",
"title": ""
},
{
"docid": "ad28f7adf67af517f0568d6cb60bcbd2",
"text": "BACKGROUND\nD2 gastrectomy is recommended in US and European guidelines, and is preferred in east Asia, for patients with resectable gastric cancer. Adjuvant chemotherapy improves patient outcomes after surgery, but the benefits after a D2 resection have not been extensively investigated in large-scale trials. We investigated the effect on disease-free survival of adjuvant chemotherapy with capecitabine plus oxaliplatin after D2 gastrectomy compared with D2 gastrectomy only in patients with stage II-IIIB gastric cancer.\n\n\nMETHODS\nThe capecitabine and oxaliplatin adjuvant study in stomach cancer (CLASSIC) study was an open-label, parallel-group, phase 3, randomised controlled trial undertaken in 37 centres in South Korea, China, and Taiwan. Patients with stage II-IIIB gastric cancer who had had curative D2 gastrectomy were randomly assigned to receive adjuvant chemotherapy of eight 3-week cycles of oral capecitabine (1000 mg/m(2) twice daily on days 1 to 14 of each cycle) plus intravenous oxaliplatin (130 mg/m(2) on day 1 of each cycle) for 6 months or surgery only. Block randomisation was done by a central interactive computerised system, stratified by country and disease stage. Patients, and investigators giving interventions, assessing outcomes, and analysing data were not masked. The primary endpoint was 3 year disease-free survival, analysed by intention to treat. This study reports a prespecified interim efficacy analysis, after which the trial was stopped after a recommendation by the data monitoring committee. The trial is registered at ClinicalTrials.gov (NCT00411229).\n\n\nFINDINGS\n1035 patients were randomised (520 to receive chemotherapy and surgery, 515 surgery only). Median follow-up was 34·2 months (25·4-41·7) in the chemotherapy and surgery group and 34·3 months (25·6-41·9) in the surgery only group. 3 year disease-free survival was 74% (95% CI 69-79) in the chemotherapy and surgery group and 59% (53-64) in the surgery only group (hazard ratio 0·56, 95% CI 0·44-0·72; p<0·0001). Grade 3 or 4 adverse events were reported in 279 of 496 patients (56%) in the chemotherapy and surgery group and in 30 of 478 patients (6%) in the surgery only group. The most common adverse events in the intervention group were nausea (n=326), neutropenia (n=300), and decreased appetite (n=294).\n\n\nINTERPRETATION\nAdjuvant capecitabine plus oxaliplatin treatment after curative D2 gastrectomy should be considered as a treatment option for patients with operable gastric cancer.\n\n\nFUNDING\nF Hoffmann-La Roche and Sanofi-Aventis.",
"title": ""
},
{
"docid": "0c7f2f7554927d61fbad7f2cb1045b03",
"text": "This paper reports a mobile application pre-launch scheme that is based on user's emotion. Smartphone application's usage and smartwatch's internal sensors are exploited to predict user's intension. User's emotion can be extracted from the PPG sensor in the smartwatch. In this paper, we extend previous App pre-launch service with user's emotion data. Applying machine learning algorithm to the training data, we can predict the application to be executed in near future. With our emotion context, we expect we can predict user's intension more accurately.",
"title": ""
},
{
"docid": "1fec7e850333576193bce7f4f4ecc2f3",
"text": "We study several machine learning algorithms for cross-lan7guage patent retrieval and classification. In comparison with most of other studies involving machine learning for cross-language information retrieval, which basically used learning techniques for monolingual sub-tasks, our learning algorithms exploit the bilingual training documents and learn a semantic representation from them. We study Japanese–English cross-language patent retrieval using Kernel Canonical Correlation Analysis (KCCA), a method of correlating linear relationships between two variables in kernel defined feature spaces. The results are quite encouraging and are significantly better than those obtained by other state of the art methods. We also investigate learning algorithms for cross-language document classification. The learning algorithm are based on KCCA and Support Vector Machines (SVM). In particular, we study two ways of combining the KCCA and SVM and found that one particular combination called SVM_2k achieved better results than other learning algorithms for either bilingual or monolingual test documents. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "157c36eaad7fe6cb6188a17c1df98507",
"text": "We describe a new method for accurate retinal vessel detection in wide-field fluorescein angiography (FA), which is a challenging problem because of the variations in vasculature between different orientations and large and small vessels, and the changes in the vasculature appearance as the injection of the dye perfuses the retina. Decomposing the original FA image into multiple resolutions, the vessels at each scale are segmented independently by first correcting for inhomogeneous illumination, then applying morphological operations to extract rectilinear structure and finally applying adaptive binarization. Specifically, a modified top-hat filter is applied using linear structuring elements with 9 directions. The maximum value of the resulting response images at each pixel location is then used for adaptive binarization. Final vessel segments are identified by fusing vessel segments at each scale. Quantitative results on VAMPIRE dataset, which includes high resolution wide-field FA images and hand-labeled ground truth vessel segments, demonstrate that the proposed method provides a significant improvement on vessel detection (approximately 10% higher recall, with same precision) than the method originally published with VAMPIRE dataset.",
"title": ""
},
{
"docid": "b6dd22ef29a87dac6b56373ce3c5f9cd",
"text": "Traditionally, object-oriented software adopts the Observer pattern to implement reactive behavior. Its drawbacks are well-documented and two families of alternative approaches have been proposed, extending object-oriented languages with concepts from functional reactive and dataflow programming, respectively event-driven programming. The former hardly escape the functional setting; the latter do not achieve the declarativeness of more functional approaches.\n In this paper, we present REScala, a reactive language which integrates concepts from event-based and functional-reactive programming into the object-oriented world. REScala supports the development of reactive applications by fostering a functional declarative style which complements the advantages of object-oriented design.",
"title": ""
},
{
"docid": "536ece61ce55754140410dc6a12835ba",
"text": "First derived from human intuition, later adapted to machine translation for automatic token alignment, attention mechanism, a simple method that can be used for encoding sequence data based on the importance score each element is assigned, has been widely applied to and attained significant improvement in various tasks in natural language processing, including sentiment classification, text summarization, question answering, dependency parsing, etc. In this paper, we survey through recent works and conduct an introductory summary of the attention mechanism in different NLP problems, aiming to provide our readers with basic knowledge on this widely used method, discuss its different variants for different tasks, explore its association with other techniques in machine learning, and examine methods for evaluating its performance.",
"title": ""
},
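A minimal NumPy sketch of the core idea surveyed above: scaled dot-product attention, which scores each element of a sequence and encodes the sequence as an importance-weighted sum. The dimensions and random inputs are illustrative assumptions only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d), K: (n_k, d), V: (n_k, d_v) -> (n_q, d_v)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # importance score for each query-key pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # importance-weighted sum of values

Q = np.random.randn(3, 8)        # e.g. 3 query tokens
K = V = np.random.randn(5, 8)    # 5 key/value tokens
context = scaled_dot_product_attention(Q, K, V)        # shape (3, 8)
```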
{
"docid": "0e12ea5492b911c8879cc5e79463c9fa",
"text": "In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.",
"title": ""
},
{
"docid": "7ec6540b44b23a0380dcb848239ccac4",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
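A compact sketch of a single highway layer as described above: a transform gate T learns how much of the nonlinear transform H(x) to pass through and how much of the input x to carry unchanged. PyTorch is used here for illustration; the layer width and the negative gate-bias initialization (favoring the carry path early in training) are common choices, not details taken from the abstract.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.H = nn.Linear(dim, dim)            # plain nonlinear transform
        self.T = nn.Linear(dim, dim)            # transform gate
        nn.init.constant_(self.T.bias, -2.0)    # start biased toward carrying x through

    def forward(self, x):
        t = torch.sigmoid(self.T(x))
        return t * torch.relu(self.H(x)) + (1.0 - t) * x

x = torch.randn(4, 64)
y = HighwayLayer(64)(x)   # output keeps the input shape, so many layers can be stacked
```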
{
"docid": "64b97f3b00f5803ce949d84968cb1d4b",
"text": "Social network and online news media are gaining popularity in recent years. Meanwhile, online fake news are becoming widespread. As a result, automating fake news detection is essential to maintain robust online media and social network. In this work, machine learning methods are employed to detect the stance of newspaper headlines on their bodies, which can serve as an important indication of content authenticity. If the newspaper headline is defined to be “unrelated” to their bodies, it indicates a high probability of the news to be “fake”. Specifically, multiple methods are used to extract features relevant to stance detection from a collection of headlines and news article bodies with different stances. These features are then used to train multiple machine learning models including support vector machines, multinomial Naive Bayes, Softmax, and multilayer perceptron. We have demonstrated very high accuracy to detect relevance between the headlines and bodies. This work can be used as a important building block for fake news detection.",
"title": ""
},
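A small sketch of one family of features the abstract above alludes to: TF-IDF cosine similarity between a headline and its article body, which helps separate "unrelated" pairs from related ones. The toy texts, labels, and choice of classifier are illustrative stand-ins, not the paper's exact configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.linear_model import LogisticRegression

headlines = ["Team wins the title", "New tax law passed"]                 # toy examples
bodies = ["The team clinched the championship last night.",
          "Local weather stayed sunny through the weekend."]
labels = [1, 0]                                                            # 1 = related, 0 = unrelated

vec = TfidfVectorizer().fit(headlines + bodies)
H, B = vec.transform(headlines), vec.transform(bodies)
sims = np.array([cosine_similarity(H[i], B[i])[0, 0] for i in range(len(headlines))])
clf = LogisticRegression().fit(sims.reshape(-1, 1), labels)                # relatedness classifier
```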
{
"docid": "4a4a11d2779eab866ff32c564e54b69d",
"text": "Although backpropagation neural networks generally predict better than decision trees do for pattern classiication problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees. In many applications, more often than not, explicit knowledge is needed by human experts. This work drives a symbolic representation for neural networks to make explicit each prediction of a neural network. An algorithm is proposed and implemented to extract symbolic rules from neural networks. Explicitness of the extracted rules is supported by comparing the symbolic rules generated by decision trees methods. Empirical study demonstrates that the proposed algorithm generates high quality rules from neural networks comparable with those of decision trees in terms of predictive accuracy, number of rules and average number of conditions for a rule. The symbolic rules from nerual networks preserve high predictive accuracy of original networks. An early and shorter version of this paper has been accepted for presentation at IJCAI'95.",
"title": ""
},
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
},
{
"docid": "8e09b4718b472dbb7df2bc4ab8d8750a",
"text": "In this article, we propose an access control mechanism for Web-based social networks, which adopts a rule-based approach for specifying access policies on the resources owned by network participants, and where authorized users are denoted in terms of the type, depth, and trust level of the relationships existing between nodes in the network. Different from traditional access control systems, our mechanism makes use of a semidecentralized architecture, where access control enforcement is carried out client-side. Access to a resource is granted when the requestor is able to demonstrate being authorized to do that by providing a proof. In the article, besides illustrating the main notions on which our access control model relies, we present all the protocols underlying our system and a performance study of the implemented prototype.",
"title": ""
},
{
"docid": "162f080444935117c5125ae8b7c3d51e",
"text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1",
"title": ""
},
{
"docid": "4845ff303fef156ceb74d1e4529f5e03",
"text": "We report a novel approach for the detection of volatile compounds employing electrostatically driven drumhead resonators as sensing elements. The resonators are based on freestanding membranes of alkanedithiol cross-linked gold nanoparticles (GNPs), which are able to sorb analytes from the gas phase. Under reduced pressure, the fundamental resonance frequency of a resonator is continuously monitored while the device is exposed to varying partial pressures of toluene, 4-methylpentan-2-one, 1-propanol, and water. The measurements reveal a strong, reversible frequency shift of up to ∼10 kHz, i.e., ∼5% of the fundamental resonance frequency, when exposing the sensor to toluene vapor with a partial pressure of ∼20 Pa. As this strong shift cannot be explained exclusively by the mass uptake in the membrane, our results suggest a significant impact of analyte sorption on the pre-stress of the freestanding GNP membrane. Thus, our findings point to the possibility of designing highly sensitive resonators, which utilize sorption induced changes in the membrane's pre-stress as primary transduction mechanism.",
"title": ""
},
{
"docid": "b24772af47f76db0f19ee281cccaa03f",
"text": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.",
"title": ""
}
] | scidocsrr |
1858e8fa3f0ff4249bd007abf7679481 | The effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospital settings: a systematic review of quantitative evidence protocol. | [
{
"docid": "e4628211d0d2657db387c093228e9b9b",
"text": "BACKGROUND\nMindfulness-based stress reduction (MBSR) is a clinically standardized meditation that has shown consistent efficacy for many mental and physical disorders. Less attention has been given to the possible benefits that it may have in healthy subjects. The aim of the present review and meta-analysis is to better investigate current evidence about the efficacy of MBSR in healthy subjects, with a particular focus on its benefits for stress reduction.\n\n\nMATERIALS AND METHODS\nA literature search was conducted using MEDLINE (PubMed), the ISI Web of Knowledge, the Cochrane database, and the references of retrieved articles. The search included articles written in English published prior to September 2008, and identified ten, mainly low-quality, studies. Cohen's d effect size between meditators and controls on stress reduction and spirituality enhancement values were calculated.\n\n\nRESULTS\nMBSR showed a nonspecific effect on stress reduction in comparison to an inactive control, both in reducing stress and in enhancing spirituality values, and a possible specific effect compared to an intervention designed to be structurally equivalent to the meditation program. A direct comparison study between MBSR and standard relaxation training found that both treatments were equally able to reduce stress. Furthermore, MBSR was able to reduce ruminative thinking and trait anxiety, as well as to increase empathy and self-compassion.\n\n\nCONCLUSIONS\nMBSR is able to reduce stress levels in healthy people. However, important limitations of the included studies as well as the paucity of evidence about possible specific effects of MBSR in comparison to other nonspecific treatments underline the necessity of further research.",
"title": ""
}
] | [
{
"docid": "460a296de1bd13378d71ce19ca5d807a",
"text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].",
"title": ""
},
{
"docid": "cf1e0d6a07674aa0b4c078550b252104",
"text": "Industry-practiced agile methods must become an integral part of a software engineering curriculum. It is essential that graduates of such programs seeking careers in industry understand and have positive attitudes toward agile principles. With this knowledge they can participate in agile teams and apply these methods with minimal additional training. However, learning these methods takes experience and practice, both of which are difficult to achieve in a direct manner within the constraints of an academic program. This paper presents a novel, immersive boot camp approach to learning agile software engineering concepts with LEGO® bricks as the medium. Students construct a physical product while inductively learning the basic principles of agile methods. The LEGO®-based approach allows for multiple iterations in an active learning environment. In each iteration, students inductively learn agile concepts through their experiences and mistakes. Subsequent iterations then ground these concepts, visibly leading to an effective process. We assessed this approach using a combination of quantitative and qualitative methods. Our assessment shows that the students demonstrated positive attitudes toward the boot-camp approach compared to lecture-based instruction. However, the agile boot camp did not have an effect on the students' recall on class tests when compared to their recall of concepts taught in lecture-based instruction.",
"title": ""
},
{
"docid": "66844a6bce975f8e3e32358f0e0d1fb7",
"text": "The recent advent of DNA sequencing technologies facilitates the use of genome sequencing data that provide means for more informative and precise classification and identification of members of the Bacteria and Archaea. Because the current species definition is based on the comparison of genome sequences between type and other strains in a given species, building a genome database with correct taxonomic information is of paramount need to enhance our efforts in exploring prokaryotic diversity and discovering novel species as well as for routine identifications. Here we introduce an integrated database, called EzBioCloud, that holds the taxonomic hierarchy of the Bacteria and Archaea, which is represented by quality-controlled 16S rRNA gene and genome sequences. Whole-genome assemblies in the NCBI Assembly Database were screened for low quality and subjected to a composite identification bioinformatics pipeline that employs gene-based searches followed by the calculation of average nucleotide identity. As a result, the database is made of 61 700 species/phylotypes, including 13 132 with validly published names, and 62 362 whole-genome assemblies that were identified taxonomically at the genus, species and subspecies levels. Genomic properties, such as genome size and DNA G+C content, and the occurrence in human microbiome data were calculated for each genus or higher taxa. This united database of taxonomy, 16S rRNA gene and genome sequences, with accompanying bioinformatics tools, should accelerate genome-based classification and identification of members of the Bacteria and Archaea. The database and related search tools are available at www.ezbiocloud.net/.",
"title": ""
},
{
"docid": "d470122d50dbb118ae9f3068998f8e14",
"text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complex between patients.",
"title": ""
},
{
"docid": "16560cdfe50fc908ae46abf8b82e620f",
"text": "While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.\n We expect field-programmable gate arrays (FPGAs or \"programmable hardware\") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. With a focus on database use, this tutorial introduces into the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a \"good\" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology.",
"title": ""
},
{
"docid": "08f45368b85de5e6036fd4309f7c7a05",
"text": "Inflammatory bowel disease (IBD) is a group of diseases characterized by inflammation of the small and large intestine and primarily includes ulcerative colitis and Crohn’s disease. Although the etiology of IBD is not fully understood, it is believed to result from the interaction of genetic, immunological, and environmental factors, including gut microbiota. Recent studies have shown a correlation between changes in the composition of the intestinal microbiota and IBD. Moreover, it has been suggested that probiotics and prebiotics influence the balance of beneficial and detrimental bacterial species, and thereby determine homeostasis versus inflammatory conditions. In this review, we focus on recent advances in the understanding of the role of prebiotics, probiotics, and synbiotics in functions of the gastrointestinal tract and the induction and maintenance of IBD remission. We also discuss the role of psychobiotics, which constitute a novel class of psychotropic agents that affect the central nervous system by influencing gut microbiota. (Inflamm Bowel Dis 2015;21:1674–1682)",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "2545af6c324fa7fb0e766bf6d68dfd90",
"text": "Evidence of aberrant hypothalamic-pituitary-adrenocortical (HPA) activity in many psychiatric disorders, although not universal, has sparked long-standing interest in HPA hormones as biomarkers of disease or treatment response. HPA activity may be chronically elevated in melancholic depression, panic disorder, obsessive-compulsive disorder, and schizophrenia. The HPA axis may be more reactive to stress in social anxiety disorder and autism spectrum disorders. In contrast, HPA activity is more likely to be low in PTSD and atypical depression. Antidepressants are widely considered to inhibit HPA activity, although inhibition is not unanimously reported in the literature. There is evidence, also uneven, that the mood stabilizers lithium and carbamazepine have the potential to augment HPA measures, while benzodiazepines, atypical antipsychotics, and to some extent, typical antipsychotics have the potential to inhibit HPA activity. Currently, the most reliable use of HPA measures in most disorders is to predict the likelihood of relapse, although changes in HPA activity have also been proposed to play a role in the clinical benefits of psychiatric treatments. Greater attention to patient heterogeneity and more consistent approaches to assessing treatment effects on HPA function may solidify the value of HPA measures in predicting treatment response or developing novel strategies to manage psychiatric disease.",
"title": ""
},
{
"docid": "37a8ec11d92dd8a83d757fa27b8f4118",
"text": "Weed control is necessary in rice cultivation, but the excessive use of herbicide treatments has led to serious agronomic and environmental problems. Suitable site-specific weed management (SSWM) is a solution to address this problem while maintaining the rice production quality and quantity. In the context of SSWM, an accurate weed distribution map is needed to provide decision support information for herbicide treatment. UAV remote sensing offers an efficient and effective platform to monitor weeds thanks to its high spatial resolution. In this work, UAV imagery was captured in a rice field located in South China. A semantic labeling approach was adopted to generate the weed distribution maps of the UAV imagery. An ImageNet pre-trained CNN with residual framework was adapted in a fully convolutional form, and transferred to our dataset by fine-tuning. Atrous convolution was applied to extend the field of view of convolutional filters; the performance of multi-scale processing was evaluated; and a fully connected conditional random field (CRF) was applied after the CNN to further refine the spatial details. Finally, our approach was compared with the pixel-based-SVM and the classical FCN-8s. Experimental results demonstrated that our approach achieved the best performance in terms of accuracy. Especially for the detection of small weed patches in the imagery, our approach significantly outperformed other methods. The mean intersection over union (mean IU), overall accuracy, and Kappa coefficient of our method were 0.7751, 0.9445, and 0.9128, respectively. The experiments showed that our approach has high potential in accurate weed mapping of UAV imagery.",
"title": ""
},
{
"docid": "85736b2fd608e3d109ce0f3c46dda9ac",
"text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.",
"title": ""
},
{
"docid": "80fe141d88740955f189e8e2bf4c2d89",
"text": "Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.",
"title": ""
},
{
"docid": "0a0cc3c3d3cd7e7c3e8b409554daa5a3",
"text": "Purpose: We investigate the extent of voluntary disclosures in UK higher education institutions’ (HEIs) annual reports and examine whether internal governance structures influence disclosure in the period following major reform and funding constraints. Design/methodology/approach: We adopt a modified version of Coy and Dixon’s (2004) public accountability index, referred to in this paper as a public accountability and transparency index (PATI), to measure the extent of voluntary disclosures in 130 UK HEIs’ annual reports. Informed by a multitheoretical framework drawn from public accountability, legitimacy, resource dependence and stakeholder perspectives, we propose that the characteristics of governing and executive structures in UK universities influence the extent of their voluntary disclosures. Findings: We find a large degree of variability in the level of voluntary disclosures by universities and an overall relatively low level of PATI (44%), particularly with regards to the disclosure of teaching/research outcomes. We also find that audit committee quality, governing board diversity, governor independence, and the presence of a governance committee are associated with the level of disclosure. Finally, we find that the interaction between executive team characteristics and governance variables enhances the level of voluntary disclosures, thereby providing support for the continued relevance of a ‘shared’ leadership in the HEIs’ sector towards enhancing accountability and transparency in HEIs. Research limitations/implications: In spite of significant funding cuts, regulatory reforms and competitive challenges, the level of voluntary disclosure by UK HEIs remains low. Whilst the role of selected governance mechanisms and ‘shared leadership’ in improving disclosure, is asserted, the varying level and selective basis of the disclosures across the surveyed HEIs suggest that the public accountability motive is weaker relative to the other motives underpinned by stakeholder, legitimacy and resource dependence perspectives. Originality/value: This is the first study which explores the association between HEI governance structures, managerial characteristics and the level of disclosure in UK HEIs.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be met. To improve performance and quality, one needs something new every day, which in turn calls for fresh inspiration. The need for inspiration leads to searching through various sources: other people's experience, the internet, and many books. Books and the internet are the recommended media to help you improve your quality and performance.",
"title": ""
},
{
"docid": "fe23c80ef28f59066b6574e9c0f8578b",
"text": "This paper applies the technology acceptance model to explore the digital divide and transformational government (t-government) in the United States. Successful t-government is predicated on citizen adoption and usage of e-government services. The contribution of this research is to enhance our understanding of the factors associated with the usage of e-government services among members of a community on the unfortunate side of the divide. A questionnaire was administered to members of a techno-disadvantaged public housing community and neighboring households who partook in training or used the community computer lab. The results indicate that perceived access barriers and perceived ease of use (PEOU) are significantly associated with usage, while perceived usefulness (PU) is not. Among the demographic characteristics, educational level, employment status, and household income all have a significant impact on access barriers, and employment is significantly associated with PEOU. Finally, PEOU is significantly related to PU. Overall, the results emphasize that t-government cannot cross the digital divide without accompanying employment programs and programs that enhance citizens' ease in using such services.",
"title": ""
},
{
"docid": "e9676faf7e8d03c64fdcf6aa5e09b008",
"text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.",
"title": ""
},
{
"docid": "d1c4e0da79ceb8893f63aa8ea7c8041c",
"text": "This paper describes the GOLD (Generic Obstacle and Lane Detection) system, a stereo vision-based hardware and software architecture developed to increment road safety of moving vehicles: it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings). It has been implemented on the PAPRICA system and works at a rate of 10 Hz.",
"title": ""
},
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of $N$ agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the multi-agent collision avoidance problem, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "3e0a731c76324ad0cea438a1d9907b68",
"text": "Due in large measure to the prodigious research efforts of Rhoades and his colleagues at the George E. Brown, Jr., Salinity Laboratory over the past two decades, soil electrical conductivity (EC), measured using electrical resistivity and electromagnetic induction (EM), is among the most useful and easily obtained spatial properties of soil that influences crop productivity. As a result, soil EC has become one of the most frequently used measurements to characterize field variability for application to precision agriculture. The value of spatial measurements of soil EC to precision agriculture is widely acknowledged, but soil EC is still often misunderstood and misinterpreted. To help clarify misconceptions, a general overview of the application of soil EC to precision agriculture is presented. The following areas are discussed with particular emphasis on spatial EC measurements: a brief history of the measurement of soil salinity with EC, the basic theories and principles of the soil EC measurement and what it actually measures, an overview of the measurement of soil salinity with various EC measurement techniques and equipment (specifically, electrical resistivity with the Wenner array and EM), examples of spatial EC surveys and their interpretation, applications and value of spatial measurements of soil EC to precision agriculture, and current and future developments. Precision agriculture is an outgrowth of technological developments, such as the soil EC measurement, which facilitate a spatial understanding of soil–water–plant relationships. The future of precision agriculture rests on the reliability, reproducibility, and understanding of these technologies. The predominant mechanism causing the salt accumulation in irrigated agricultural soils is evapotranspiration. The salt contained in the irrigation water is left behind in the soil as the pure water passes back to the atmosphere through the processes of evaporation and plant transpiration. The effects of salinity are manifested in loss of stand, reduced rates of plant growth, reduced yields, and in severe cases, total crop failure (Rhoades and Loveday, 1990). Salinity limits water uptake by plants by reducing the osmotic potential and thus the total soil water potential. Salinity may also cause specific ion toxicity or upset the nutritional balance. In addition, the salt composition of the soil water influences the composition of cations on the exchange complex of soil particles, which influences soil permeability and tilth, depending on salinity level and exchangeable cation composition. Aside from decreasing crop yield and impacting soil hydraulics, salinity can detrimentally impact ground water, and in areas where tile drainage occurs, drainage water can become a disposal problem as demonstrated in the southern San Joaquin Valley of central California. From a global perspective, irrigated agriculture makes an essential contribution to the food needs of the world. While only 15% of the world's farmland is irrigated, roughly 35 to 40% of the total supply of food and fiber comes from irrigated agriculture (Rhoades and Loveday, 1990). However, vast areas of irrigated land are threatened by salinization. Although accurate worldwide data are not available, it is estimated that roughly half of all existing irrigation systems (totaling about 250 million ha) are affected by salinity and waterlogging (Rhoades and Loveday, 1990). Salinity within irrigated soils clearly limits productivity in vast areas of the USA and other parts of the world. It is generally accepted that the extent of salt-affected soil is increasing. In spite of the fact that salinity buildup on irrigated lands is responsible for the declining resource base for agriculture, we do not know the exact extent to which soils in our country are salinized, the degree to which productivity is being reduced by salinity, the increasing or decreasing trend in soil salinity development, and the location of contributory sources of salt loading to ground and drainage waters. Suitable soil inventories do not exist and until recently, neither did practical techniques to monitor salinity or assess the …",
"title": ""
},
{
"docid": "8c301956112a9bfb087ae9921d80134a",
"text": "This paper presents an operation analysis of a high-frequency three-level (TL) PWM inverter applied to induction heating applications. The feature of the TL inverter is that it achieves zero-voltage switching (ZVS) above the resonant frequency. The circuit has been modified from the full-bridge inverter to reach high voltage with low-harmonic output. The device voltage stresses are limited to half of the DC input voltage. The prototype, operated between 70 and 78 kHz at a DC voltage rating of 580 V, can supply an output power of up to 3000 W. The iron has been heated and hardened at temperatures of up to 800 °C. In addition, the experiments have been successfully carried out and compared with the simulations.",
"title": ""
}
] | scidocsrr |
05374ea370531d13bc8a10ee6d514e5c | Application of the Health Belief Model (HBM) in HIV Prevention: A Literature Review | [
{
"docid": "99f62da011921c0ff51daf0c928c865a",
"text": "The Health Belief Model, social learning theory (recently relabelled social cognitive theory), self-efficacy, and locus of control have all been applied with varying success to problems of explaining, predicting, and influencing behavior. Yet, there is conceptual confusion among researchers and practitioners about the interrelationships of these theories and variables. This article attempts to show how these explanatory factors may be related, and in so doing, posits a revised explanatory model which incorporates self-efficacy into the Health Belief Model. Specifically, self-efficacy is proposed as a separate independent variable along with the traditional health belief variables of perceived susceptibility, severity, benefits, and barriers. Incentive to behave (health motivation) is also a component of the model. Locus of control is not included explicitly because it is believed to be incorporated within other elements of the model. It is predicted that the new formulation will more fully account for health-related behavior than did earlier formulations, and will suggest more effective behavioral interventions than have hitherto been available to health educators.",
"title": ""
}
] | [
{
"docid": "6a3042419132c5bf19c5476b9e7e79fe",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii",
"title": ""
},
{
"docid": "f474fd0bce5fa65e79ceb77a17ace260",
"text": "One popular approach to controlling humanoid robots is through inverse kinematics (IK) with stiff joint position tracking. On the other hand, inverse dynamics (ID) based approaches have gained increasing acceptance by providing compliant motions and robustness to external perturbations. However, the performance of such methods is heavily dependent on high quality dynamic models, which are often very difficult to produce for a physical robot. IK approaches only require kinematic models, which are much easier to generate in practice. In this paper, we supplement our previous work with ID-based controllers by adding IK, which helps compensate for modeling errors. The proposed full body controller is applied to three tasks in the DARPA Robotics Challenge (DRC) Trials in Dec. 2013.",
"title": ""
},
{
"docid": "22ea838da6a012a580a79215638834e0",
"text": "There has been a recent surge of success in utilizing Deep Learning (DL) in imaging and speech applications for its relatively automatic feature generation and, in particular for convolutional neural networks (CNNs), high accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (as architecture construction) through hyper-parameter choices remains a tedious and highly intuition driven task. To address this, Multi-node Evolutionary Neural Networks for Deep Learning (MENNDL) is proposed as a method for automating network selection on computational clusters through hyper-parameter optimization performed via genetic algorithms.",
"title": ""
},
{
"docid": "c58df0eeece5147cce15bcf49f76ba94",
"text": "Recent research has shown that astrocytes, a type of glial cell, underpin a self-repair mechanism in the human brain, where spiking neurons provide direct and indirect feedbacks to presynaptic terminals. These feedbacks modulate the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or near silent due to the low PR of synapses; the PRs of the remaining healthy synapses are then increased by the indirect feedback from the astrocyte cell. In this paper, a novel hardware architecture of a Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability of the human brain. This paper demonstrates that the hardware can self-detect and self-repair synaptic faults without the conventional components for fault detection and fault repairing. Experimental results show that SPANNER can maintain the system performance with fault densities of up to 40%, and more importantly SPANNER has only a 20% performance degradation when the self-repairing architecture is significantly damaged at a fault density of 80%.",
"title": ""
},
{
"docid": "81c02e708a21532d972aca0b0afd8bb5",
"text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.",
"title": ""
},
{
"docid": "d5008ed5c6c41c55759bd87dacb82c08",
"text": "Attestation is a mechanism used by a trusted entity to validate the software integrity of an untrusted platform. Over the past few years, several attestation techniques have been proposed. While they all use variants of a challenge-response protocol, they make different assumptions about what an attacker can and cannot do. Thus, they propose intrinsically divergent validation approaches. We survey in this article the different approaches to attestation, focusing in particular on those aimed at Wireless Sensor Networks. We discuss the motivations, challenges, assumptions, and attacks of each approach. We then organise them into a taxonomy and discuss the state of the art, carefully analysing the advantages and disadvantages of each proposal. We also point towards the open research problems and give directions on how to address them.",
"title": ""
},
{
"docid": "8f7be5526799d96dd13addf382fd4cae",
"text": "Web Phishing (Phishing) uses social engineering techniques, through short messages, emails and IMs, to induce users to visit faked websites and give up sensitive information. With detection methods for phishing continually proposed and applied, the threat of web phishing has already been reduced to a great extent. However, since each type of detection has limitations, phishing attackers can modify their strategies at a relatively low cost to avoid detection accordingly. Facing the defects of current detection, we mainly focus on the behavior patterns of phishing websites. We analyze real IP flows from an ISP and propose a detection method based on Graph Mining with Belief Propagation. The experiments suggest that our algorithm has decent accuracy and runtime efficiency. As we have considered distributed computation while designing the algorithm, it will be easy to replicate our model in popular distributed processing frameworks.",
"title": ""
},
{
"docid": "f2e9083262c2680de3cf756e7960074a",
"text": "Social commerce is a new development in e-commerce generated by the use of social media to empower customers to interact on the Internet. The recent advancements in ICTs and the emergence of Web 2.0 technologies along with the popularity of social media and social networking sites have seen the development of new social platforms. These platforms facilitate the use of social commerce. Drawing on literature from marketing and information systems (IS), the author proposes a new model to develop our understanding of social commerce, using a PLS-SEM methodology to test the model. Results show that Web 2.0 applications are attracting individuals to have interactions as well as generate content on the Internet. Consumers use social commerce constructs for these activities, which in turn increase the level of trust and intention to buy. Implications, limitations, discussion, and future research directions are discussed at the end of the paper.",
"title": ""
},
{
"docid": "c8bc5126cfd10674ba1198400a3c4a48",
"text": "We present a novel method called Contextual Pyramid CNN (CP-CNN) for generating high-quality crowd density and count estimation by explicitly incorporating global and local contextual information of crowd images. The proposed CP-CNN consists of four modules: Global Context Estimator (GCE), Local Context Estimator (LCE), Density Map Estimator (DME) and a Fusion-CNN (F-CNN). GCE is a VGG-16 based CNN that encodes global context and it is trained to classify input images into different density classes, whereas LCE is another CNN that encodes local context information and it is trained to perform patch-wise classification of input images into different density classes. DME is a multi-column architecture-based CNN that aims to generate high-dimensional feature maps from the input image which are fused with the contextual information estimated by GCE and LCE using F-CNN. To generate high resolution and high-quality density maps, F-CNN uses a set of convolutional and fractionally-strided convolutional layers and it is trained along with the DME in an end-to-end fashion using a combination of adversarial loss and pixellevel Euclidean loss. Extensive experiments on highly challenging datasets show that the proposed method achieves significant improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "1e33efef22f44869fd4fe45c3504a2f0",
"text": "H.264/AVC, as the most recent video coding standard, delivers significantly better performance compared to previous standards, supporting higher video quality over lower bit rate channels. The H.264 in-loop deblocking filter is one of several complex techniques that have realized this superior coding quality. The deblocking filter is a computationally and data intensive tool resulting in increased execution time of both the encoding and decoding processes. In this paper, in order to reduce the deblocking complexity, we propose a new 2D deblocking filtering algorithm based on the existing 1D method of the H.264/AVC standard. Simulation results indicate that the proposed technique achieves a 40% speed improvement compared to the existing 1D H.264/AVC deblocking filter, while affecting the SNR by 0.15% on average.",
"title": ""
},
{
"docid": "5594fc8fec483698265abfe41b3776c9",
"text": "This paper is an abridgement and update of numerous IEEE papers dealing with Squirrel Cage Induction Motor failure analysis. They are the result of a taxonomic study and research conducted by the author during a 40-year career in the motor industry. As the Petrochemical Industry is evolving toward reliability-based maintenance, increased attention should be given to preventing repeated failures. The Root Cause Failure methodology presented in this paper will assist in this transition. The scope of the product includes Squirrel Cage Induction Motors up to 3000 hp; however, much of this methodology has application to larger sizes and types.",
"title": ""
},
{
"docid": "26a6ba8cba43ddfd3cac0c90750bf4ad",
"text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.",
"title": ""
},
{
"docid": "59323291555a82ef99013bd4510b3020",
"text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of superimposing images of the same scene taken at different times, locations and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically, image registration finds temporal correspondences between the set of images and uses a transformation model to infer features from these correspondences. The approaches for image registration can be classified according to their nature, viz. area-based and feature-based, and dimensionality, viz. spatial domain and frequency domain. The procedure of image registration by intensity-based model, spatial domain transform, rigid transform and non-rigid transform based on the above-mentioned classification has been performed, and the quality of the image is measured by three quality parameters: SNR, PSNR and MSE. The techniques have been implemented, and it is inferred that the non-rigid transform exhibits higher perceptual quality and offers a visually sharper image than the other techniques. Problematic issues of image registration techniques and an outlook for future research are discussed. This work may be one of the comprehensive reference sources for researchers involved in image registration.",
"title": ""
},
{
"docid": "0730725975ef43aab73d83cbd8d307c8",
"text": "App stores are one of the most popular ways of providing content to mobile device users today. But with thousands of competing apps and thousands new each day, the problem of presenting the developers' apps to users becomes nontrivial. There may be an app for everything, but if the user cannot find the app they desire, then the app store has failed. This paper investigates app store content organisation using AppEco, an Artificial Life model of mobile app ecosystems. In AppEco, developer agents build and upload apps to the app store; user agents browse the store and download the apps. This paper uses AppEco to investigate how best to organise the Top Apps Chart and New Apps Chart in Apple's iOS App Store. We study the effects of different app ranking algorithms for the Top Apps Chart and the frequency of updates of the New Apps Chart on the download-to-browse ratio. Results show that the effectiveness of the shop front is highly dependent on the speed at which content is updated. A slowly updated New Apps Chart will impact the effectiveness of the Top Apps Chart. A Top Apps Chart that measures success by including too much historical data will also detrimentally affect app downloads.",
"title": ""
},
{
"docid": "c2fb88df12e97e8475bb923063c8a46e",
"text": "This paper addresses the job shop scheduling problem in the presence of machine breakdowns. In this work, we propose to exploit the advantages of data mining techniques to resolve the problem. We propose an approach to discover a set of classification rules by using historic scheduling data. Intelligent decisions are then made in real time, based on these constructed rules, to assign the corresponding dispatching rule in a dynamic job shop scheduling environment. Finally, a simulation study is conducted with the constructed rules and four other dispatching rules from the literature. The experimental results verify the performance of the classification rules for minimizing mean tardiness.",
"title": ""
},
{
"docid": "7e3bee96d9f3ce9cd46a8d70f9db9b3b",
"text": "In the modern era of professional and amateur basketball, automated statistical positional marking, real-time referrals and video footage analysis have become part and parcel of the game. Every pre-match discussion is dominated by extensive study of the opponent's defensive and offensive formations, plays and metrics. A computerized video analysis is required for this reason, as it will provide a concise guide for analysis. In addition, dubious judgemental calls made by referees in real time have a serious impact on the game, and video analysis will make real-time video referrals a possibility. Thus, in a nutshell, a player positional marking system can generate statistical data for strategic planning and achieve proper rule enforcement. This research presents a survey of available sports systems that attempt to pinpoint the position of the sportsperson.",
"title": ""
},
{
"docid": "5e840c5649492d5e93ddef2b94432d5f",
"text": "Commercially available laser lithography systems have been available for several years. One such system manufactured by Heidelberg Instruments can be used to produce masks for lithography or to directly pattern photoresist using either a 3 micron or 1 micron beam. These systems are designed to operate using computer aided design (CAD) mask files, but also have the capability of using images. In image mode, the power of the exposure is based on the intensity of each pixel in the image. This results in individual pixels that are the size of the beam, which establishes the smallest feature that can be patterned. When developed, this produces a range of heights within the photoresist which can then be transferred to the material beneath and used for a variety of applications. Previous research efforts have demonstrated that this process works well overall, but is limited in resolution and feature size due to the pixel approach of the exposure. However, if we modify the method used, much smaller features can be resolved, without the pixilation. This is achieved by utilizing multiple exposures of slightly different CAD type files in sequence. While the smallest beam width is approximately 1 micron, the beam positioning accuracy is much smaller, with 40 nm step changes in beam position based on the machine's servo gearing and optical design. When exposing in CAD mode, the beam travels along lines at constant power, so by automating multiple files in succession, and employing multiple smaller exposures of lower intensity, a similar result can be achieved. With this line exposure approach, pixilation can be greatly reduced. Due to the beam positioning accuracy of this mode, the effective resolution between lines is on the order of 40 nm steps, resulting in unexposed features of much smaller size and higher resolution.",
"title": ""
},
{
"docid": "6835adf52ba07062e31f10125b2684fd",
"text": "Nowadays, people use on-line services to conduct various tasks such as on-line shopping and holiday trip planning using web applications. Generally users are required to enter information into web forms to interact with the web applications. However they often have to type in the same information to different web applications repetitively. It could be a tedious job for a user to fill in a large amount of web forms with the same information. To save users from typing redundant information, it is critical to propagate and pre-fill the user's previous inputs across different web applications. However, existing software and approaches cannot meet this urgent need. In this position paper, we propose an intelligent framework to propagate user's inputs across different web applications. Our framework collects user's inputs and analyzes the patterns of user's usage. Furthermore it detects the changes of user's contexts by extracting user's contextual information from various sources such as a user's calender. Our framework clusters the user interface (UI) components to form semantic groups of similar UI components based on our proposed clustering approach. Knowing the similarity relation between UI components, the framework can pre-fill the web forms with user's previous inputs. We conduct a preliminary study on effectiveness of our proposed clustering approach. We achieved a precision of 80% and a recall of 87%.",
"title": ""
},
{
"docid": "250bb85dc0659f21ba8bbaa42b9b30ce",
"text": "The hippocampus and surrounding regions of the medial temporal lobe play a central role in all neuropsychological theories of memory. It is still a matter of debate, however, how best to characterise the functions of these regions, the hippocampus in particular. In this article, I examine the proposal that the hippocampus is a \"stupid\" module whose specific domain is consciously apprehended information. A number of interesting consequences for the organisation of memory and the brain follow from this proposal and the assumptions it entails. These, in turn, have important implications for neuropsychological theories of recent and remote episodic, semantic, and spatial memory and for the functions that episodic memory may serve in perception, comprehension, planning, imagination, and problem solving. I consider these implications by selectively reviewing the literature and primarily drawing on research my collaborators and I have conducted.",
"title": ""
}
] | scidocsrr |
6a5783cc4e6a093f505b017eadcfd23b | Dissociable roles of prefrontal and anterior cingulate cortices in deception. | [
{
"docid": "908716e7683bdc78283600f63bd3a1b0",
"text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.",
"title": ""
}
] | [
{
"docid": "b59965c405937a096186e41b2a3877c3",
"text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].",
"title": ""
},
{
"docid": "b705b194b79133957662c018ea6b1c7a",
"text": "Skew detection has been an important part of document recognition systems. A lot of techniques already exist, and more are currently being developed, for detecting the skew of scanned document images. This paper describes the skew detection and correction of scanned document images written in the Assamese language using horizontal and vertical projection profile analysis, and brings out the differences after implementation of both techniques.",
"title": ""
},
{
"docid": "1813c1cefbb5607660626b6c05c41960",
"text": "First described in 1925, giant condyloma acuminatum, also known as Buschke-Löwenstein tumor (BLT), is a benign, slow-growing, locally destructive cauliflower-like lesion usually found in the genital region. The disease is usually locally aggressive and destructive, with a potential for malignant transformation. The causative organism is human papilloma virus. The most common risk factor is immunosuppression with HIV; however, any other cause of immunodeficiency can be a predisposing factor. We present the case of a 33-year-old female patient, known to be HIV-positive and on antiretroviral therapy for ten months. She presented with a seven-month history of an abnormal growth in the genitalia that was progressive, friable, and accompanied by a foul-smelling yellowish discharge. Surgical excision was performed successfully. Pap smear of the excised tissue was negative. Despite being a rare condition, giant condyloma acuminatum is relatively common in HIV-infected patients.",
"title": ""
},
{
"docid": "0763497a09f54e2d49a03e262dcc7b6e",
"text": "Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N^λ), where N is the number of subscriptions and λ is a closed-form expression that depends on the number and type of attributes (in some cases, 1/2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions.",
"title": ""
},
{
"docid": "037fb8eb72b55b8dae1aee107eb6b15c",
"text": "Traditional methods on video summarization are designed to generate summaries for single-view video records, and thus they cannot fully exploit the mutual information in multi-view video records. In this paper, we present a multiview metric learning framework for multi-view video summarization. It combines the advantages of maximum margin clustering with the disagreement minimization criterion. The learning framework thus has the ability to find a metric that best separates the input data, and meanwhile to force the learned metric to maintain underlying intrinsic structure of data points, for example geometric information. Facilitated by such a framework, a systematic solution to the multi-view video summarization problem is developed from the viewpoint of metric learning. The effectiveness of the proposed method is demonstrated by experiments.",
"title": ""
},
{
"docid": "76c31d0f392b81658270805daaff661d",
"text": "One of the major challenges of model-free visual tracking problem has been the difficulty originating from the unpredictable and drastic changes in the appearance of objects we target to track. Existing methods tackle this problem by updating the appearance model on-line in order to adapt to the changes in the appearance. Despite the success of these methods however, inaccurate and erroneous updates of the appearance model result in a tracker drift. In this paper, we introduce a novel visual tracking algorithm based on a template selection strategy constructed by deep reinforcement learning methods. The tracking algorithm utilizes this strategy to choose the best template for tracking a given frame. The template selection strategy is selflearned by utilizing a simple policy gradient method on numerous training episodes randomly generated from a tracking benchmark dataset. Our proposed reinforcement learning framework is generally applicable to other confidence map based tracking algorithms. The experiment shows that our tracking algorithm effectively decides the best template for visual tracking.",
"title": ""
},
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
},
{
"docid": "ac8cef535e5038231cdad324325eaa37",
"text": "There are mainly two types of Emergent Self-Organizing Maps (ESOM) grid structures in use: hexgrid (honeycomb like) and quadgrid (trellis like) maps. In addition to that, the shape of the maps may be square or rectangular. This work investigates the effects of these different map layouts. Hexgrids were found to have no convincing advantage over quadgrids. Rectangular maps, however, are distinctively superior to square maps. Most surprisingly, rectangular maps outperform square maps for isotropic data, i.e. data sets with no particular primary direction.",
"title": ""
},
{
"docid": "921d9dc34f32522200ddcd606d22b6b4",
"text": "The covariancematrix adaptation evolution strategy (CMA-ES) is one of themost powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMAES (MO-CMA-ES) a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.",
"title": ""
},
{
"docid": "246866da7509b2a8a2bda734a664de9c",
"text": "In this paper we present an approach to procedural game content generation that focuses on a gameplay loops formal language (GLFL). During an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantics allow reducing the gap between game designers' requirements and game developers' needs, thereby enhancing video game productivity. Using the gameplay loops concept for game content generation offers a low-cost solution for adjusting game challenges, objectives and rewards in video games. A pilot experiment has been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "1d50d61d6b0abb0d5bec74d613ffe172",
"text": "We propose a novel hardware-accelerated voxelization algorithm for polygonal models. Compared with previous approaches, our algorithm has a major advantage that it guarantees the conservative correctness in voxelization: every voxel intersecting the input model is correctly recognized. This property is crucial for applications like collision detection, occlusion culling and visibility processing. We also present an efficient and robust implementation of the algorithm in the GPU. Experiments show that our algorithm has a lower memory consumption than previous approaches and is more efficient when the volume resolution is high. In addition, our algorithm requires no preprocessing and is suitable for voxelizing deformable models.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "1d7ee43299e3a7581d11604f1596aeab",
"text": "We analyze the impact of corruption on bilateral trade, highlighting its dual role in terms of extortion and evasion. Corruption taxes trade, when corrupt customs officials in the importing country extort bribes from exporters (extortion effect); however, with high tariffs, corruption may be trade enhancing when corrupt officials allow exporters to evade tariff barriers (evasion effect). We derive and estimate a corruption-augmented gravity model, where the effect of corruption on trade flows is ambiguous and contingent on tariffs. Empirically, corruption taxes trade in the majority of cases, but in high-tariff environments (covering 5% to 14% of the observations) their marginal effect is trade enhancing.",
"title": ""
},
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
},
{
"docid": "0962dfe13c1960b345bb0abb480f1520",
"text": "This paper presents the application of a novel method of bipedal walking pattern generation based on “the liquid level model” and the preview control of the zero-moment point (ZMP). In this method, the trajectory of the center of mass (CoM) of the robot is generated by the preview controller so as to maintain the ZMP at the desired location, with the robot modeled as liquid running in a tank. The proposed approach combines preview control theory with a simple model, “the liquid level model”, to ensure stable dynamic walking. Simulation results show that the proposed pattern generator guarantees not only dynamically stable walking but also good performance.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam, a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive new-generation autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "77ce917536f59d5489d0d6f7000c7023",
"text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-ofthe-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.",
"title": ""
}
] | scidocsrr |
e2fd9849b1664bbdf7f8f9130f94ab8a | User Movement Prediction: The Contribution of Machine Learning Techniques | [
{
"docid": "e494f926c9b2866d2c74032d200e4d0a",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
},
{
"docid": "ec06587bff3d5c768ab9083bd480a875",
"text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.",
"title": ""
}
] | [
{
"docid": "5483778c0565b3fef8fbc2c4f9769d5d",
"text": "Previous studies of preference for and harmony of color combinations have produced confusing results. For example, some claim that harmony increases with hue similarity, whereas others claim that it decreases. We argue that such confusions are resolved by distinguishing among three types of judgments about color pairs: (1) preference for the pair as a whole, (2) harmony of the pair as a whole, and (3) preference for its figural color when viewed against its colored background. Empirical support for this distinction shows that pair preference and harmony both increase as hue similarity increases, but preference relies more strongly on component color preference and lightness contrast. Although pairs with highly contrastive hues are generally judged to be neither preferable nor harmonious, figural color preference ratings increase as hue contrast with the background increases. The present results thus refine and clarify some of the best-known and most contentious claims of color theorists.",
"title": ""
},
{
"docid": "9f3388eb88e230a9283feb83e4c623e1",
"text": "Entity Linking (EL) is an essential task for semantic text understanding and information extraction. Popular methods separately address the Mention Detection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging their mutual dependency. We here propose the first neural end-to-end EL system that jointly discovers and links entities in a text document. The main idea is to consider all possible spans as potential mentions and learn contextual similarity scores over their entity candidates that are useful for both MD and ED decisions. Key components are context-aware mention embeddings, entity embeddings and a probabilistic mention entity map, without demanding other engineered features. Empirically, we show that our end-to-end method significantly outperforms popular systems on the Gerbil platform when enough training data is available. Conversely, if testing datasets follow different annotation conventions compared to the training set (e.g. queries/ tweets vs news documents), our ED model coupled with a traditional NER system offers the best or second best EL accuracy.",
"title": ""
},
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "89bec90bd6715a3907fba9f0f7655158",
"text": "Long text brings a big challenge to neural network based text matching approaches due to their complicated structures. To tackle the challenge, we propose a knowledge enhanced hybrid neural network (KEHNN) that leverages prior knowledge to identify useful information and filter out noise in long text and performs matching from multiple perspectives. The model fuses prior knowledge into word representations by knowledge gates and establishes three matching channels with words, sequential structures of text given by Gated Recurrent Units (GRUs), and knowledge enhanced representations. The three channels are processed by a convolutional neural network to generate high level features for matching, and the features are synthesized as a matching score by a multilayer perceptron. In this paper, we focus on exploring the use of taxonomy knowledge for text matching. Evaluation results from extensive experiments on public data sets of question answering and conversation show that KEHNN can significantly outperform state-of-the-art matching models and particularly improve matching accuracy on pairs with long text.",
"title": ""
},
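The knowledge-gate idea described in the passage above can be sketched as a sigmoid gate that decides, per dimension, how much of a prior-knowledge embedding is fused into a word embedding. Shapes, names, and the random parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def knowledge_gate(word_vec, know_vec, W_g):
    """Fuse a prior-knowledge embedding into a word embedding via an element-wise gate.

    word_vec, know_vec : (d,) vectors; W_g : (d, 2d) learned gate weights.
    """
    gate = sigmoid(W_g @ np.concatenate([word_vec, know_vec]))  # per-dimension gate in (0, 1)
    return gate * word_vec + (1.0 - gate) * know_vec            # gated mixture

# Illustrative usage with random parameters.
rng = np.random.default_rng(0)
d = 8
w, k = rng.normal(size=d), rng.normal(size=d)
W_g = rng.normal(size=(d, 2 * d))
fused = knowledge_gate(w, k, W_g)
```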
{
"docid": "a960d6049c099ec652da81216b3bc173",
"text": "Recent research has illustrated privacy breaches that can be effected on an anonymized dataset by an attacker who has access to auxiliary information about the users. Most of these attack strategies rely on the uniqueness of specific aspects of the users' data - e.g., observing a mobile user at just a few points on the time-location space are sufficient to uniquely identify him/her from an anonymized set of users. In this work, we consider de-anonymization attacks on anonymized summary statistics in the form of histograms. Such summary statistics are useful for many applications that do not need knowledge about exact user behavior. We consider an attacker who has access to an anonymized set of histograms of K users' data and an independent set of data belonging to the same users. Modeling the users' data as i.i.d., we study the composite hypothesis testing problem of identifying the correct matching between the anonymized histograms from the first set and the user data from the second. We propose a Generalized Likelihood Ratio Test as a solution to this problem and show that the solution can be identified using a minimum weight matching algorithm on an K × K complete bipartite weighted graph. We show that a variant of this solution is asymptotically optimal as the data lengths are increased.We apply the algorithm on mobility traces of over 1000 users on EPFL campus collected during two weeks and show that up to 70% of the users can be correctly matched. These results show that anonymized summary statistics of mobility traces themselves contain a significant amount of information that can be used to uniquely identify users by an attacker who has access to auxiliary information about the statistics.",
"title": ""
},
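The matching step in the passage above can be sketched as a minimum-weight bipartite assignment over negative log-likelihood costs. The multinomial cost with add-one smoothing used below is an illustrative stand-in for the paper's GLRT statistic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_histograms(anon_hists, user_counts):
    """anon_hists, user_counts : (K, B) arrays of per-user histograms over B bins.

    Returns a permutation `perm` such that anonymized histogram perm[i]
    is matched to user i.
    """
    K = anon_hists.shape[0]
    probs = (anon_hists + 1.0) / (anon_hists + 1.0).sum(axis=1, keepdims=True)  # smoothed
    # cost[i, j] = -log P(user i's counts | anonymized histogram j), up to a constant
    cost = -(user_counts @ np.log(probs).T)
    rows, cols = linear_sum_assignment(cost)       # minimum-weight perfect matching
    perm = np.empty(K, dtype=int)
    perm[rows] = cols
    return perm
```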
{
"docid": "1d6733d6b017248ef935a833ecfe6f0d",
"text": "Users increasingly rely on crowdsourced information, such as reviews on Yelp and Amazon, and liked posts and ads on Facebook. This has led to a market for blackhat promotion techniques via fake (e.g., Sybil) and compromised accounts, and collusion networks. Existing approaches to detect such behavior relies mostly on supervised (or semi-supervised) learning over known (or hypothesized) attacks. They are unable to detect attacks missed by the operator while labeling, or when the attacker changes strategy. We propose using unsupervised anomaly detection techniques over user behavior to distinguish potentially bad behavior from normal behavior. We present a technique based on Principal Component Analysis (PCA) that models the behavior of normal users accurately and identifies significant deviations from it as anomalous. We experimentally validate that normal user behavior (e.g., categories of Facebook pages liked by a user, rate of like activity, etc.) is contained within a low-dimensional subspace amenable to the PCA technique. We demonstrate the practicality and effectiveness of our approach using extensive ground-truth data from Facebook: we successfully detect diverse attacker strategies—fake, compromised, and colluding Facebook identities—with no a priori labeling while maintaining low false-positive rates. Finally, we apply our approach to detect click-spam in Facebook ads and find that a surprisingly large fraction of clicks are from anomalous users.",
"title": ""
},
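A minimal sketch of the residual-subspace idea described above: fit PCA on behavior vectors assumed to be mostly normal, then score each user by the energy of their behavior outside the retained low-dimensional subspace. Feature construction and thresholds are illustrative assumptions.

```python
import numpy as np

def fit_normal_subspace(X, k):
    """X : (n_users, n_features) behavior matrix; k : number of retained components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                              # top-k principal directions

def residual_scores(X, mu, V_k):
    Xc = X - mu
    projected = Xc @ V_k.T @ V_k                   # part explained by the normal subspace
    return np.linalg.norm(Xc - projected, axis=1)  # anomaly score = residual energy

# Flag users whose residual exceeds, e.g., the 99th percentile of training residuals.
```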
{
"docid": "eed515cb3a2a990e67bf76c176c16d29",
"text": "This paper describes the question generation system developed at UPenn for QGSTEC, 2010. The system uses predicate argument structures of sentences along with semantic roles for the question generation task from paragraphs. The semantic role labels are used to identify relevant parts of text before forming questions over them. The generated questions are then ranked to pick final six best questions.",
"title": ""
},
{
"docid": "34523c9ccd5d8c0bec2a84173205be99",
"text": "Deep learning has achieved astonishing results onmany taskswith large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples – often collected online in real-time – and model errors may lead to drastic damages of the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN) as a deep network structure upon which Lagrangian Mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method did not only outperform previous model learning approaches at learning speed but exhibits substantially improved and more robust extrapolation to novel trajectories and learns online in real-time.",
"title": ""
},
{
"docid": "9321905fe504f3a1f5c5e63e92f9d5ec",
"text": "The principles of implementation of the control system with sinusoidal PWM inverter voltage frequency scalar and vector control induction motor are reviewed. Comparisons of simple control system with sinusoidal PWM control system and sinusoidal PWM control with an additional third-harmonic signal and gain modulated control signal are carried out. There are shown the maximum amplitude and actual values phase and line inverter output voltage at the maximum amplitude of the control signals. Recommendations on the choice of supply voltage induction motor electric drive with frequency scalar control are presented.",
"title": ""
},
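The comparison in the passage above can be made concrete with a small numeric sketch: a plain sinusoidal reference saturates at modulation index 1, whereas adding a one-sixth third-harmonic component lowers the reference peak to about 0.866, so the fundamental can be raised by roughly 15% before over-modulation. Amplitudes are per-unit and purely illustrative.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)

def reference(m, third_harmonic=False):
    """Per-unit modulating signal for one phase (carrier peak = 1)."""
    ref = m * np.sin(theta)
    if third_harmonic:
        ref += (m / 6.0) * np.sin(3.0 * theta)     # classic one-sixth injection
    return ref

# Plain sinusoidal PWM: the reference peak reaches 1.0 at m = 1.0.
print(np.max(np.abs(reference(1.0))))               # ~1.000
# With third-harmonic injection the same fundamental peaks at ~0.866,
# so m can be raised to about 1.155 before over-modulation.
print(np.max(np.abs(reference(1.0, third_harmonic=True))))    # ~0.866
print(np.max(np.abs(reference(1.155, third_harmonic=True))))  # ~1.000
```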
{
"docid": "41b92e3e2941175cf6d80bf809d7bd32",
"text": "Automated citation analysis (ACA) can be important for many applications including author ranking and literature based information retrieval, extraction, summarization and question answering. In this study, we developed a new compositional attention network (CAN) model to integrate local and global attention representations with a hierarchical attention mechanism. Training on a new benchmark corpus we built, our evaluation shows that the CAN model performs consistently well on both citation classification and sentiment analysis tasks.",
"title": ""
},
{
"docid": "876bbee05b7838f4de218b424d895887",
"text": "Although it is commonplace to assume that the type or level of processing during the input of a verbal item determines the representation of that item in memory, which in turn influences later attempts to store, recognize, or recall that item or similar items, it is much less common to assume that the way in which an item is retrieved from memory is also a potent determiner of that item's subsequent representation in memory. Retrieval from memory is often assumed, implicitly or explicitly, as a process analogous to the way in which the contents of a memory location in a computer are read out, that is, as a process that does not, by itself, modify the state of the retrieved item in memory. In my opinion, however, there is ample evidence for a kind of Heisenberg principle with respect to retrieval processes: an item can seldom, if ever, be retrieved from memory without modifying the representation of that item in memory in significant ways. It is both appropriate and productive, I think, to analyze retrieval processes within the same kind of levels-of-processing framework formulated by Craik and Lockhart ( 1972) with respect to input processes; this chapter is an attempt to do so. In the first of the two main sections below, I explore the extent to which negative-recency phenomena in the long-term recall of a list of items is attributable to differences in levels of retrieval during initial recall. In the second section I present some recent results from ex-",
"title": ""
},
{
"docid": "0a50e10df0a8e4a779de9ed9bf81e442",
"text": "This paper presents a novel self-correction method of commutation point for high-speed sensorless brushless dc motors with low inductance and nonideal back electromotive force (EMF) in order to achieve low steady-state loss of magnetically suspended control moment gyro. The commutation point before correction is obtained by detecting the phase of EMF zero-crossing point and then delaying 30 electrical degrees. Since the speed variation is small between adjacent commutation points, the difference of the nonenergized phase's terminal voltage between the beginning and the end of commutation is mainly related to the commutation error. A novel control method based on model-free adaptive control is proposed, and the delay degree is corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve ideal commutation effect within the entire operating speed range.",
"title": ""
},
{
"docid": "f59096137378d49c81bcb1de0be832b2",
"text": "Here the transformation related to the fast Fourier strategy mainly used in the field oriented well effective operations of the strategy elated to the scenario of the design oriented fashion in its implementation related to the well efficient strategy of the processing of the signal in the digital domain plays a crucial role in its analysis point of view in well oriented fashion respectively. It can also be applicable for the processing of the images and there is a crucial in its analysis in terms of the pixel wise process takes place in the system in well effective manner respectively. There is a vast number of the applications oriented strategy takes place in the system in w ell effective manner in the system based implementation followed by the well efficient analysis point of view in well stipulated fashion of the transformation related to the fast Fourier strategy plays a crucial role and some of them includes analysis of the signal, Filtering of the sound and also the compression of the data equations of the partial differential strategy plays a major role and the responsibility in its implementation scenario in a well oriented fashion respectively. There is a huge amount of the efficient analysis of the system related to the strategy of the transformation of the fast Fourier environment plays a crucial role and the responsibility for the effective implementation of the DFT in well respective fashion. Here in the present system oriented strategy DFT implementation takes place in a well explicit manner followed by the well effective analysis of the system where domain related to the time based strategy of the decimation plays a crucial role in its implementation aspect in well effective fashion respectively. Experiments have been conducted on the present method where there is a lot of analysis takes place on the large number of the huge datasets in a well oriented fashion with respect to the different environmental strategy and there is an implementation of the system in a well effective manner in terms of the improvement in the performance followed by the outcome of the entire system in well oriented fashion respectively.",
"title": ""
},
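Since the passage above concerns the decimation-in-time formulation, a short recursive radix-2 FFT sketch may make the idea concrete (the input length is assumed to be a power of two).

```python
import numpy as np

def fft_dit(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_dit(x[0::2])                        # DFT of even-indexed samples
    odd = fft_dit(x[1::2])                         # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

# Sanity check against numpy's FFT.
x = np.random.default_rng(0).normal(size=16)
assert np.allclose(fft_dit(x), np.fft.fft(x))
```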
{
"docid": "acf86ba9f98825a032cebb0a98db4360",
"text": "Malware is the root cause of many security threats on the Internet. To cope with the thousands of new malware samples that are discovered every day, security companies and analysts rely on automated tools to extract the runtime behavior of malicious programs. Of course, malware authors are aware of these tools and increasingly try to thwart their analysis techniques. To this end, malware code is often equipped with checks that look for evidence of emulated or virtualized analysis environments. When such evidence is found, the malware program behaves differently or crashes, thus showing a different “personality” than on a real system. Recent work has introduced transparent analysis platforms (such as Ether or Cobra) that make it significantly more difficult for malware programs to detect their presence. Others have proposed techniques to identify and bypass checks introduced by malware authors. Both approaches are often successful in exposing the runtime behavior of malware even when the malicious code attempts to thwart analysis efforts. However, these techniques induce significant performance overhead, especially for fine-grained analysis. Unfortunately, this makes them unsuitable for the analysis of current highvolume malware feeds. In this paper, we present a technique that efficiently detects when a malware program behaves differently in an emulated analysis environment and on an uninstrumented reference host. The basic idea is simple: we just compare the runtime behavior of a sample in our analysis system and on a reference machine. However, obtaining a robust and efficient comparison is very difficult. In particular, our approach consists of recording the interactions of the malware with the operating system in one run and using this information to deterministically replay the program in our analysis environment. Our experiments demonstrate that, by using our approach, one can efficiently detect malware samples that use a variety of techniques to identify emulated analysis environments.",
"title": ""
},
{
"docid": "4a9d14c2fd87d8ab64560adf13c6164c",
"text": "Cepstral coefficients derived either through linear prediction (LP) analysis or from filter bank are perhaps the most commonly used features in currently available speech rec ognition systems. In this paper, we propose spectral subband centroids as new features and use them as supplement to cepstral features for speech rec ognition. We show that these features have properties similar to formant frequencies and they are quite robust to noise. Recognition results are reported in the paper justifying the usefulness of these features as supplementary features.",
"title": ""
},
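A brief sketch of how spectral subband centroids can be computed from a short-time magnitude spectrum. The band edges and the spectral weighting exponent are illustrative parameters, not values taken from the paper.

```python
import numpy as np

def subband_centroids(frame, sr, band_edges_hz, gamma=1.0):
    """Return one centroid (in Hz) per subband for a single windowed frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** gamma
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroids = []
    for lo, hi in band_edges_hz:
        band = (freqs >= lo) & (freqs < hi)
        weight = spectrum[band]
        centroids.append(np.sum(freqs[band] * weight) / (np.sum(weight) + 1e-12))
    return np.array(centroids)

# Example: four subbands on a 25 ms frame at 16 kHz.
sr = 16_000
frame = np.hamming(400) * np.random.default_rng(0).normal(size=400)
print(subband_centroids(frame, sr, [(0, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]))
```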
{
"docid": "48dfee242d5daf501c72e14e6b05c3ba",
"text": "One possible alternative to standard in vivo exposure may be virtual reality exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure (VRE) is potentially an efficient and cost-effective treatment of anxiety disorders. VRE therapy has been successful in reducing the fear of heights in the first known controlled study of virtual reality in the treatment of a psychological disorder. Outcome was assessed on measures of anxiety, avoidance, attitudes, and distress. Significant group differences were found on all measures such that the VRE group was significantly improved at posttreatment but the control group was unchanged. The efficacy of virtual reality exposure therapy was also supported for the fear of flying in a case study. The potential for virtual reality exposure treatment for these and other disorders is explored.",
"title": ""
},
{
"docid": "137cb8666a1b5465abf8beaf394e3a30",
"text": "Person re-identification (re-ID) has been gaining in popularity in the research community owing to its numerous applications and growing importance in the surveillance industry. Recent methods often employ partial features for person re-ID and offer fine-grained information beneficial for person retrieval. In this paper, we focus on learning improved partial discriminative features using a deep convolutional neural architecture, which includes a pyramid spatial pooling module for efficient person feature representation. Furthermore, we propose a multi-task convolutional network that learns both personal attributes and identities in an end-to-end framework. Our approach incorporates partial features and global features for identity and attribute prediction, respectively. Experiments on several large-scale person re-ID benchmark data sets demonstrate the accuracy of our approach. For example, we report rank-1 accuracies of 85.37% (+3.47 %) and 92.81% (+0.51 %) on the DukeMTMC re-ID and Market-1501 data sets, respectively. The proposed method shows encouraging improvements compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "a4af2c561f340c52629478cac5e691d3",
"text": "The Internet has always been a means of communication between people, but with the technological development and changing requirements and lifestyle, this network has become a tool of communication between things of all types and sizes, and is known as Internet of things (IoT) for this reason.\n One of the most promising applications of IoT technology is the automated irrigation systems. The aim of this paper is to propose a methodology of the implementation of wireless sensor networks as an IoT device to develop a smart irrigation management system powered by solar energy.",
"title": ""
},
{
"docid": "678a4872dfe753bac26bff2b29ac26b0",
"text": "Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output from learning components can lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework where a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.",
"title": ""
}
] | scidocsrr |
954a411bf58312459ac38b4b9d4d3bf1 | Foresight: Rapid Data Exploration Through Guideposts | [
{
"docid": "299242a092512f0e9419ab6be13f9b93",
"text": "In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset.\n We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.",
"title": ""
},
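The first prefetching mechanism described above (learning what to fetch from the user's recent movements) can be sketched as a simple momentum predictor over tile coordinates. The tile-grid abstraction, window size, and ranking rule are assumptions made for illustration.

```python
from collections import Counter

def predict_next_tiles(history, budget=3):
    """history : list of (x, y) tile coordinates visited, most recent last.

    Rank candidate tiles by how often the user's recent moves point at them.
    """
    if len(history) < 2:
        return []
    moves = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(history[:-1], history[1:])]
    recent = Counter(moves[-5:])                   # weight only the last few moves
    x, y = history[-1]
    ranked = sorted(recent.items(), key=lambda kv: kv[1], reverse=True)
    return [(x + dx, y + dy) for (dx, dy), _ in ranked[:budget]]

# A user panning east then starting to pan north: east-neighbour is fetched first.
print(predict_next_tiles([(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)]))
```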
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
},
{
"docid": "467c2a106b6fd5166f3c2a44d655e722",
"text": "AutoVis is a data viewer that responds to content – text, relational tables, hierarchies, streams, images – and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface – analysts simply drag a data source into an empty window, (2) a graphics generator that requires no user definitions to produce graphs, (3) a statistical analyzer that protects users from false conclusions, and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.",
"title": ""
}
] | [
{
"docid": "4d1ae6893fa8b19d05da5794a3fb7978",
"text": "This study analyzes the influence of IT governance on IT investment performance. IT investment performance is known to vary widely across firms. Prior studies find that the variations are often due to the lack of investments in complementary organizational capitals. The presence of complementarities between IT and organizational capitals suggests that IT investment decisions should be made at the right organizational level to ensure that both IT and organizational factors are taken into consideration. IT governance, which determines the allocation of IT decision rights within a firm, therefore, plays an important role in IT investment performance. This study tests this proposition by using a sample dataset from Fortune 1000 firms. A key challenge in this study is that the appropriate IT governance mode varies across firms as well as across business units within a firm. We address this challenge by developing an empirical model of IT governance that is based on earlier studies on multiple contingency factors of IT governance. We use the empirical model to predict the appropriate IT governance mode for each business unit within a firm and use the difference between the predicted and observed IT governance mode to derive a measure of a firm’s IT governance misalignment. We find that firms with high IT governance misalignment receive no benefits from their IT investments; whereas firms with low IT governance misalignment obtain two to three times the value from their IT investments compared to firms with average IT governance misalignment. Our results highlight the importance of IT governance in realizing value from IT investments and confirm the validity of using the multiple contingency factor model in assessing IT governance decisions.",
"title": ""
},
{
"docid": "972ef2897c352ad384333dd88588f0e6",
"text": "We describe a vision-based obstacle avoidance system for of f-road mobile robots. The system is trained from end to end to map raw in put images to steering angles. It is trained in supervised mode t predict the steering angles provided by a human driver during training r uns collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two f orwardpointing wireless color cameras. A remote computer process es the video and controls the robot via radio. The learning system is a lar ge 6-layer convolutional network whose input is a single left/right pa ir of unprocessed low-resolution images. The robot exhibits an excell ent ability to detect obstacles and navigate around them in real time at spe ed of 2 m/s.",
"title": ""
},
{
"docid": "0f0305afce53933df1153af6a31c09fb",
"text": "In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two types of primary features-point and line segments-have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method to better utilize the advantages of point and line segment features and achieve robust results in difficult environments is proposed. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization problem of line segment features, we add minimization of angle observation in addition to the traditional re-projection error of endpoints. Finally, our model of motion estimation, which is adaptive to the motion state of the camera, is applied to build a new combinational Hessian matrix and gradient vector for iterated pose estimation. Furthermore, our proposal has been tested on EuRoC MAV datasets and sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.",
"title": ""
},
{
"docid": "9c9e3261c293aedea006becd2177a6d5",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "36874bcbbea1563542265cf2c6261ede",
"text": "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "48b78cae830b76b85c5205a9728244be",
"text": "The striking ability of music to elicit emotions assures its prominent status in human culture and every day life. Music is often enjoyed and sought for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood, to changes in our physical condition and actions. Consequently, research on how we might associate musical pieces with emotions and, more generally, how music brings about an emotional response is attracting ever increasing attention. First, this paper provides a thorough review of studies on the relation of music and emotions from di↵erent disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, a↵ective computing, semantic technologies and music information retrieval.",
"title": ""
},
{
"docid": "709c06739d20fe0a5ba079b21e5ad86d",
"text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.",
"title": ""
},
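The affinity-based assignment step can be sketched as follows: given the topic distribution of a new bug report and a topic profile per developer estimated from previously fixed reports, developers are ranked by a dot-product affinity. Names and smoothing are illustrative, not the paper's exact estimator.

```python
import numpy as np

def rank_fixers(bug_topics, fixer_topic_counts, top_k=5):
    """bug_topics : (T,) topic distribution of the new report.
    fixer_topic_counts : dict developer -> (T,) topic counts over previously fixed reports.
    """
    scores = {}
    for dev, counts in fixer_topic_counts.items():
        profile = (counts + 1e-3) / (counts + 1e-3).sum()   # smoothed topic profile
        scores[dev] = float(bug_topics @ profile)            # affinity of the developer to the report
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```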
{
"docid": "4cc3f3a5e166befe328b6e18bc836e89",
"text": "Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementation of these techniques into an existing application tends to be a daunting and time-consuming task. In this paper we present visage|SDK, a versatile framework for real-time character animation based on MPEG-4 FBA standard that offers a wide spectrum of features that includes animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.",
"title": ""
},
{
"docid": "002fe3efae0fc9f88690369496ce5e7d",
"text": "Experimental evidence suggests that emotions can both speed-up and slow-down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in the explanation of the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in ones sentience are then perceived as longer than neutral stimuli with a smaller share.",
"title": ""
},
{
"docid": "782346defc00d03c61fb8f694d612653",
"text": "We present PrologCheck, an automatic tool for propertybased testing of programs in the logic programming language Prolog with randomised test data generation. The tool is inspired by the well known QuickCheck, originally designed for the functional programming language Haskell. It includes features that deal with specific characteristics of Prolog such as its relational nature (as opposed to Haskell) and the absence of a strong type discipline. PrologCheck expressiveness stems from describing properties as Prolog goals. It enables the definition of custom test data generators for random testing tailored for the property to be tested. Further, it allows the use of a predicate specification language that supports types, modes and constraints on the number of successful computations. We evaluate our tool on a number of examples and apply it successfully to debug a Prolog library for AVL search trees.",
"title": ""
},
{
"docid": "ba4df2305d4f292a6ee0f033e58d7a16",
"text": "Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization, with non-rigid deformations based frame-to-model map fusion. The performance of the proposed method is evaluated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors vary from 1.58 to 2.17 cm.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "c196444f2093afc3092f85b8fbb67da5",
"text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.",
"title": ""
},
{
"docid": "8b45d7f55e7968a203da2eb09c712858",
"text": "The importance of demonstrating the value achieved from IT investments is long established in the Computer Science (CS) and Information Systems (IS) literature. However, emerging technologies such as the ever-changing complex area of cloud computing present new challenges and opportunities for demonstrating how IT investments lead to business value. This paper conducts a multidisciplinary systematic literature review drawing from CS, IS, and Business disciplines to understand the current evidence on the quantification of financial value from cloud computing investments. The study identified 53 articles, which were coded in an analytical framework across six themes (measurement type, costs, benefits, adoption type, actor and service model). Future research directions were presented for each theme. The review highlights the need for multi-disciplinary research which both explores and further develops the conceptualization of value in cloud computing research, and research which investigates how IT value manifests itself across the chain of service provision and in inter-organizational scenarios.",
"title": ""
},
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
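The orientation-model idea can be sketched as a least-squares fit of a low-order 2D Fourier basis to the doubled ridge orientation (fitting cos 2θ and sin 2θ sidesteps the π-periodicity of orientations). The basis order and coordinate normalization are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fourier_basis(x, y, order):
    """2D Fourier expansion terms evaluated at normalized block coordinates x, y in [0, 1]."""
    terms = []
    for m in range(order + 1):
        for n in range(order + 1):
            cx, sx = np.cos(2 * np.pi * m * x), np.sin(2 * np.pi * m * x)
            cy, sy = np.cos(2 * np.pi * n * y), np.sin(2 * np.pi * n * y)
            terms += [cx * cy, cx * sy, sx * cy, sx * sy]
    return np.stack(terms, axis=1)                 # (n_points, n_terms)

def fit_orientation_field(x, y, theta, order=4):
    """Fit model coefficients to block-wise ridge orientations theta (radians)."""
    B = fourier_basis(x, y, order)
    coeff_cos, *_ = np.linalg.lstsq(B, np.cos(2 * theta), rcond=None)
    coeff_sin, *_ = np.linalg.lstsq(B, np.sin(2 * theta), rcond=None)
    return coeff_cos, coeff_sin

def evaluate_orientation(x, y, coeff_cos, coeff_sin, order=4):
    B = fourier_basis(x, y, order)
    return 0.5 * np.arctan2(B @ coeff_sin, B @ coeff_cos)   # reconstructed orientation
```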
{
"docid": "4bf253b2349978d17fd9c2400df61d21",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the e¤ects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
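The averaging argument lends itself to a small worked example: for two measurements of pH 6 and pH 8, direct averaging gives pH 7, whereas averaging the hydrogen ion concentrations first lets the more acidic sample dominate, exactly the distortion by extreme values that the passage describes.

```python
import numpy as np

ph = np.array([6.0, 8.0])
h = 10.0 ** (-ph)                   # hydrogen ion concentrations (mol/L)

print(ph.mean())                    # 7.0   -> direct average of the pH values
print(-np.log10(h.mean()))          # ~6.30 -> pH of the averaged [H+], pulled toward the acidic extreme
```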
{
"docid": "99bac31f4d0df12cf25f081c96d9a81a",
"text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.",
"title": ""
},
{
"docid": "f0846b4e74110ed469704c4a24407cc6",
"text": "Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection. & 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).",
"title": ""
}
] | scidocsrr |
cfe1dd6ca8441b2c694ac3d856e9f5fb | Using boosted trees for click-through rate prediction for sponsored search | [
{
"docid": "3734fd47cf4e4e5c00f660cbb32863f0",
"text": "We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft’s Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.",
"title": ""
}
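A hedged sketch of the kind of online Bayesian probit update the passage describes: each binary feature carries a Gaussian belief over its weight, and a click/no-click observation shifts the means and shrinks the variances of the active features. The update follows the standard probit message-passing form; the prior variance and noise scale below are illustrative, and the pruning and parallelization discussed in the paper are not shown.

```python
import numpy as np
from scipy.stats import norm

class OnlineProbitCTR:
    def __init__(self, n_features, prior_var=1.0, beta=0.05):
        self.mu = np.zeros(n_features)             # posterior means of the weights
        self.var = np.full(n_features, prior_var)  # posterior variances
        self.beta = beta                           # probit noise scale

    def predict(self, active):
        """active : indices of the binary features present in this impression."""
        total_var = self.beta ** 2 + self.var[active].sum()
        return norm.cdf(self.mu[active].sum() / np.sqrt(total_var))

    def update(self, active, clicked):
        y = 1.0 if clicked else -1.0
        total_std = np.sqrt(self.beta ** 2 + self.var[active].sum())
        t = y * self.mu[active].sum() / total_std
        v = norm.pdf(t) / norm.cdf(t)              # truncated-Gaussian correction terms
        w = v * (v + t)
        self.mu[active] += y * (self.var[active] / total_std) * v
        self.var[active] *= 1.0 - (self.var[active] / total_std ** 2) * w

# Illustrative usage: one observed click on an impression with three active features.
model = OnlineProbitCTR(n_features=1000)
model.update(active=np.array([3, 42, 917]), clicked=True)
print(model.predict(active=np.array([3, 42, 917])))
```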
] | [
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "9c9e1458740337c7b074710297a386a8",
"text": "Seed dormancy is an innate seed property that defines the environmental conditions in which the seed is able to germinate. It is determined by genetics with a substantial environmental influence which is mediated, at least in part, by the plant hormones abscisic acid and gibberellins. Not only is the dormancy status influenced by the seed maturation environment, it is also continuously changing with time following shedding in a manner determined by the ambient environment. As dormancy is present throughout the higher plants in all major climatic regions, adaptation has resulted in divergent responses to the environment. Through this adaptation, germination is timed to avoid unfavourable weather for subsequent plant establishment and reproductive growth. In this review, we present an integrated view of the evolution, molecular genetics, physiology, biochemistry, ecology and modelling of seed dormancy mechanisms and their control of germination. We argue that adaptation has taken place on a theme rather than via fundamentally different paths and identify similarities underlying the extensive diversity in the dormancy response to the environment that controls germination.",
"title": ""
},
{
"docid": "d791a5d7a113a5d789452e664669570c",
"text": "Cloud computing is a new way of delivering computing resources and is not a new technology. It is an internet based service delivery model which provides internet based services, computing and storage for users in all markets including financial health care and government. This new economic model for computing has found fertile ground and is attracting massive global investment. Although the benefits of cloud computing are clear, so is the need to develop proper security for cloud implementations. Cloud security is becoming a key differentiator and competitive edge between cloud providers. This paper discusses the security issues that arise in a cloud computing frame work. It focuses on technical security issues arising from the usage of cloud services and also provides an overview of key security issues related to cloud computing with the view of a secure cloud architecture environment.",
"title": ""
},
{
"docid": "463d0bca287f0bd00585b4c96d12d014",
"text": "In this paper, we present a novel approach to extract songlevel descriptors built from frame-level timbral features such as Mel-frequency cepstral coefficient (MFCC). These descriptors are called identity vectors or i-vectors and are the results of a factor analysis procedure applied on framelevel features. The i-vectors provide a low-dimensional and fixed-length representation for each song and can be used in a supervised and unsupervised manner. First, we use the i-vectors for an unsupervised music similarity estimation, where we calculate the distance between i-vectors in order to predict the genre of songs. Second, for a supervised artist classification task we report the performance measures using multiple classifiers trained on the i-vectors. Standard datasets for each task are used to evaluate our method and the results are compared with the state of the art. By only using timbral information, we already achieved the state of the art performance in music similarity (which uses extra information such as rhythm). In artist classification using timbre descriptors, our method outperformed the state of the art.",
"title": ""
},
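The unsupervised similarity step described above reduces to comparing fixed-length i-vectors. Assuming the i-vectors have already been extracted by a separate factor-analysis front end, a minimal sketch of length-normalised cosine retrieval with a nearest-neighbour genre vote might look like this; the array shapes, random placeholder data, and the use of scikit-learn's NearestNeighbors are assumptions for illustration only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ivectors = rng.normal(size=(100, 400))   # placeholder: one 400-dim i-vector per song
genres = rng.integers(0, 5, size=100)    # placeholder genre labels

# Length-normalise the i-vectors, a common preprocessing step before cosine scoring.
ivectors /= np.linalg.norm(ivectors, axis=1, keepdims=True)

# For each song, look up its nearest neighbours and vote on the genre.
nn = NearestNeighbors(n_neighbors=6, metric="cosine").fit(ivectors)
_, idx = nn.kneighbors(ivectors)

correct = 0
for i, neighbours in enumerate(idx):
    votes = genres[neighbours[1:]]                # drop the query song itself
    predicted = np.bincount(votes).argmax()
    correct += predicted == genres[i]
print("nearest-neighbour genre accuracy on random data:", correct / len(genres))
```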
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "a4a4c67e0ca81a099f58146fccc5a2eb",
"text": "Chinese calligraphy is among the finest and most important of all Chinese art forms and an inseparable part of Chinese history. Its delicate aesthetic effects are generally considered to be unique among all calligraphic arts. Its subtle power is integral to traditional Chinese painting. A novel intelligent system uses a constraint-based analogous-reasoning process to automatically generate original Chinese calligraphy that meets visually aesthetic requirements. We propose an intelligent system that can automatically create novel, aesthetically appealing Chinese calligraphy from a few training examples of existing calligraphic styles. To demonstrate the proposed methodology's feasibility, we have implemented a prototype system that automatically generates new Chinese calligraphic art from a small training set.",
"title": ""
},
{
"docid": "87e050b5ae29487cb9cbdbbe672010ea",
"text": "The goal of data mining is to extract or “mine” knowledge from large amounts of data. However, data is often collected by several different sites. Privacy, legal and commercial concerns restrict centralized access to this data, thus derailing data mining projects. Recently, there has been growing focus on finding solutions to this problem. Several algorithms have been proposed that do distributed knowledge discovery, while providing guarantees on the non-disclosure of data. Vertical partitioning of data is an important data distribution model often found in real life. Vertical partitioning or heterogeneous distribution implies that different features of the same set of data are collected by different sites. In this chapter we survey some of the methods developed in the literature to mine vertically partitioned data without violating privacy and discuss challenges and complexities specific to vertical partitioning.",
"title": ""
},
{
"docid": "5fafb56408b75344fe7e55260a758180",
"text": "This paper presents a new conversion method to automatically transform a constituent-based Vietnamese Treebank into dependency trees. On a dependency Treebank created according to our new approach, we examine two stateof-the-art dependency parsers: the MSTParser and the MaltParser. Experiments show that the MSTParser outperforms the MaltParser. To the best of our knowledge, we report the highest performances published to date in the task of dependency parsing for Vietnamese. Particularly, on gold standard POS tags, we get an unlabeled attachment score of 79.08% and a labeled attachment score of 71.66%.",
"title": ""
},
{
"docid": "ac1cf73b0f59279d02611239781af7c1",
"text": "This paper presents V3, an unsupervised system for aspect-based Sentiment Analysis when evaluated on the SemEval 2014 Task 4. V3 focuses on generating a list of aspect terms for a new domain using a collection of raw texts from the domain. We also implement a very basic approach to classify the aspect terms into categories and assign polarities to them.",
"title": ""
},
{
"docid": "99582c5c50f5103f15a6777af94c6584",
"text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.",
"title": ""
},
{
"docid": "32b860121b49bd3a61673b3745b7b1fd",
"text": "Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race that involves more and more natural language processing (NLP). So far, NLP has been used mostly for detection, and works well on human-generated reviews. But what happens if NLP techniques are used to generate fake reviews as well? We investigate the question in an adversarial setup, by assessing the detectability of different fake-review generation strategies. We use generative models to produce reviews based on meta-information, and evaluate their effectiveness against deceptiondetection models and human judges. We find that meta-information helps detection, but that NLP-generated reviews conditioned on such information are also much harder to detect than conventional ones.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "96fb1910ed0127ad330fd427335b4587",
"text": "OBJECTIVES\nThe aim of this cross-sectional in vivo study was to assess the effect of green tea and honey solutions on the level of salivary Streptococcus mutans.\n\n\nSTUDY DESIGN\nA convenient sample of 30 Saudi boys aged 7-10 years were randomly assigned into 2 groups of 15 each. Saliva sample was collected for analysis of level of S. mutans before rinsing. Commercial honey and green tea were prepared for use and each child was asked to rinse for two minutes using 10 mL of the prepared honey or green tea solutions according to their group. Saliva samples were collected again after rinsing. The collected saliva samples were prepared and colony forming unit (CFU) of S. mutans per mL of saliva was calculated.\n\n\nRESULTS\nThe mean number of S. mutans before and after rinsing with honey and green tea solutions were 2.28* 10(8)(2.622*10(8)), 5.64 *10(7)(1.03*10(8)), 1.17*10(9)(2.012*10(9)) and 2.59*10(8) (3.668*10(8)) respectively. A statistically significant reduction in the average number of S. mutans at baseline and post intervention in the children who were assigned to the honey (P=0.001) and green tea (P=0.001) groups was found.\n\n\nCONCLUSIONS\nA single time mouth rinsing with honey and green tea solutions for two minutes effectively reduced the number of salivary S. mutans of 7-10 years old boys.",
"title": ""
},
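The study above compares salivary S. mutans counts in the same children before and after rinsing. A paired, non-parametric test is one common way to check such a reduction for significance; the CFU values below are made-up placeholders for illustration, not the study's data, and the choice of the Wilcoxon signed-rank test on log counts is an assumption.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical CFU/mL counts for one rinse group (not the published data).
before = np.array([2.1e8, 3.5e8, 1.2e8, 4.0e8, 2.8e8, 1.9e8, 3.1e8, 2.5e8])
after  = np.array([5.0e7, 9.1e7, 3.2e7, 1.1e8, 6.4e7, 4.8e7, 8.7e7, 6.0e7])

# Work on log10 counts, since bacterial counts are usually treated as log-normal.
stat, p = wilcoxon(np.log10(before), np.log10(after))
print(f"Wilcoxon signed-rank statistic = {stat:.2f}, p = {p:.4f}")
print("Mean log10 reduction:", np.mean(np.log10(before) - np.log10(after)))
```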
{
"docid": "59ef2705492241fbe588e36c77f142bc",
"text": "A reciprocal frame (RF) is a self-supported three-dimensional structure made up of three or more sloping rods, which form a closed circuit, namely an RF-unit. Large RF-structures built as complex grillages of one or a few similar RF-units have an intrinsic beauty derived from their inherent self-similar and highly symmetric patterns. Designing RF-structures that span over large domains is an intricate and complex task. In this paper, we present an interactive computational tool for designing RF-structures over a 3D guiding surface, focusing on the aesthetic aspect of the design.\n There are three key contributions in this work. First, we draw an analogy between RF-structures and plane tiling with regular polygons, and develop a computational scheme to generate coherent RF-tessellations from simple grammar rules. Second, we employ a conformal mapping to lift the 2D tessellation over a 3D guiding surface, allowing a real-time preview and efficient exploration of wide ranges of RF design parameters. Third, we devise an optimization method to guarantee the collinearity of contact joints along each rod, while preserving the geometric properties of the RF-structure. Our tool not only supports the design of wide variety of RF pattern classes and their variations, but also allows preview and refinement through interactive controls.",
"title": ""
},
{
"docid": "1a9fc19eb416eebdbfe1110c37e0852b",
"text": "Two important aspects of switched-mode (Class-D) amplifiers providing a high signal to noise ratio (SNR) for mechatronic applications are investigated. Signal jitter is common in digital systems and introduces noise, leading to a deterioration of the SNR. Hence, a jitter elimination technique for the transistor gate signals in power electronic converters is presented and verified. Jitter is reduced tenfold as compared to traditional approaches to values of 25 ps at the output of the power stage. Additionally, digital modulators used for the generation of the switch control signals can only achieve a limited resolution (and hence, limited SNR) due to timing constraints in digital circuits. Consequently, a specialized modulator structure based on noise shaping is presented and optimized which enables the creation of high-resolution switch control signals. This, together with the jitter reduction circuit, enables half-bridge output voltage SNR values of more than 100dB in an open-loop system.",
"title": ""
},
{
"docid": "6c68bccf376da1f963aaa8ec5e08b646",
"text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.",
"title": ""
},
{
"docid": "4124c4c838d0c876f527c021a2c58358",
"text": "Early disease detection is a major challenge in agriculture field. Hence proper measures has to be taken to fight bioagressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field,which ultimately leads to crops management. Our goal is early detection of bioagressors. The paper describes a software prototype system for pest detection on the infected images of different leaves. Images of the infected leaf are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the particular plants. Then the detected part is been processed for futher feature extraction which gives general idea about pests. This proposes automatic detection and calculating area of infection on leaves of a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.",
"title": ""
},
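As a rough sketch of the segmentation-and-area step outlined above, one might threshold a leaf photograph in HSV space and report the infected fraction of the leaf area with OpenCV; the file name and the colour ranges below are illustrative guesses, not the paper's calibrated values or method.

```python
import cv2
import numpy as np

image = cv2.imread("leaf.jpg")                       # hypothetical input photograph
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Illustrative HSV hue ranges: healthy tissue assumed green, infected tissue yellow/brown.
healthy_mask = cv2.inRange(hsv, (35, 40, 40), (95, 255, 255))
infected_mask = cv2.inRange(hsv, (10, 40, 40), (34, 255, 255))

# Clean up small speckles before measuring areas.
kernel = np.ones((5, 5), np.uint8)
infected_mask = cv2.morphologyEx(infected_mask, cv2.MORPH_OPEN, kernel)

leaf_area = cv2.countNonZero(healthy_mask) + cv2.countNonZero(infected_mask)
infected_area = cv2.countNonZero(infected_mask)
if leaf_area > 0:
    print(f"Infected fraction of leaf area: {infected_area / leaf_area:.1%}")
```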
{
"docid": "36911701bcf6029eb796bac182e5aa4c",
"text": "In this paper, we describe the approaches taken in the 4WARD project to address the challenges of the network of the future. Our main hypothesis is that the Future Internet must allow for the fast creation of diverse network designs and paradigms, and must also support their co-existence at run-time. We observe that a pure evolutionary path from the current Internet design will not be able to address, in a satisfactory manner, major issues like the handling of mobile users, information access and delivery, wide area sensor network applications, high management complexity, and malicious traffic that hamper network performance already today. Moreover, the Internetpsilas focus on interconnecting hosts and delivering bits has to be replaced by a more holistic vision of a network of information and content. This is a natural evolution of scope requiring nonetheless a re-design of the architecture. We describe how 4WARD directs research on network virtualisation, novel InNetworkManagement, a generic path concept, and an information centric approach, into a single framework for a diversified, but interoperable, network of the future.",
"title": ""
},
{
"docid": "d8c5ff196db9acbea12e923b2dcef276",
"text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.",
"title": ""
},
{
"docid": "9b49a4673456ab8e9f14a0fe5fb8bcc7",
"text": "Legged robots offer the potential to navigate a wide variety of terrains that are inaccessible to wheeled vehicles. In this paper we consider the planning and control tasks of navigating a quadruped robot over a wide variety of challenging terrain, including terrain which it has not seen until run-time. We present a software architecture that makes use of both static and dynamic gaits, as well as specialized dynamic maneuvers, to accomplish this task. Throughout the paper we highlight two themes that have been central to our approach: 1) the prevalent use of learning algorithms, and 2) a focus on rapid recovery and replanning techniques; we present several novel methods and algorithms that we developed for the quadruped and that illustrate these two themes. We evaluate the performance of these different methods, and also present and discuss the performance of our system on the official Learning Locomotion tests.",
"title": ""
}
] | scidocsrr |
93fd1f38fd71e79ae01f79e17fd24eea | Microesthetic dental analysis in parents of children with oral clefts | [
{
"docid": "0f66b62ddfd89237bb62fb6b60a7551a",
"text": "BACKGROUND\nClinicians' expanding use of cosmetic restorative procedures has generated greater interest in the determination of esthetic guidelines and standards. The overall esthetic impact of a smile can be divided into four specific areas: gingival esthetics, facial esthetics, microesthetics and macroesthetics. In this article, the authors focus on the principles of macroesthetics, which represents the relationships and ratios of relating multiple teeth to each other, to soft tissue and to facial characteristics.\n\n\nCASE DESCRIPTION\nThe authors categorize macroesthetic criteria based on two reference points: the facial midline and the amount and position of tooth reveal. The facial midline is a critical reference position for determining multiple design criteria. The amount and position of tooth reveal in various views and lip configurations also provide valuable guidelines in determining esthetic tooth positions and relationships.\n\n\nCLINICAL IMPLICATIONS\nEsthetics is an inherently subjective discipline. By understanding and applying simple esthetic rules, tools and strategies, dentists have a basis for evaluating natural dentitions and the results of cosmetic restorative procedures. Macroesthetic components of teeth and their relationship to each other can be influenced to produce more natural and esthetically pleasing restorative care.",
"title": ""
}
] | [
{
"docid": "d97b4905e1e06e521fe797df7499a521",
"text": "This paper studied a remote control system based on the LabVIEW and ZLG PCI-5110 CAN card, in which students could perform experiments by remote control laboratory via the Internet. Due to the fact that the internet becomes more integrated into our daily lives, several possibilities have arisen to use this cost-effective worldwide standard for distributing data. National Instruments LabVIEW is available to publish data from the development environment to the Web. The student can access the remote laboratory and perform experiments without any limitation of time and location. They can also observe the signals by changing the parameters of the experiment and evaluating the results. During the session, the teacher can watch and communicate with students who perform their experiment. The usefulness of remote laboratory in teaching environments is already known: it saves equipment, personnel for the institution and it saves time and money for the remote students. It also allows the same equipment to be used in research purposes by many teams, through Internet. The experiments proved the feasibility of technical solutions, as well as the correctness of implementation in this paper.",
"title": ""
},
{
"docid": "ff826e50f789d4e47f30ec22396c365d",
"text": "In present Scenario of the world, Internet has almost reached to every aspect of our lives. Due to this, most of the information sharing and communication is carried out using web. With such rapid development of Internet technology, a big issue arises of unauthorized access to confidential data, which leads to utmost need of information security while transmission. Cryptography and Steganography are two of the popular techniques used for secure transmission. Steganography is more reliable over cryptography as it embeds secret data within some cover material. Unlike cryptography, Steganography is not for keeping message hidden from intruders but it does not allow anyone to know that hidden information even exist in communicated material, as the transmitted material looks like any normal message which seem to be of no use for intruders. Although, Steganography covers many types of covers to hide data like text, image, audio, video and protocols but recent developments focuses on Image Steganography due to its large data hiding capacity and difficult identification, also due to their greater scope and bulk sharing within social networks. A large number of techniques are available to hide secret data within digital images such as LSB, ISB, and MLSB etc. In this paper, a detailed review will be presented on Image Steganography and also different data hiding and security techniques using digital images with their scope and features.",
"title": ""
},
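To make the LSB technique mentioned above concrete, here is a minimal numpy sketch that hides a short byte string in the least significant bits of an 8-bit grayscale image and recovers it. Real systems add encryption, length headers, and pixel-selection schemes; the random cover image and payload are placeholders.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Write the payload bits into the least significant bits of the cover pixels."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    stego = flat.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the least significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = b"meet at noon"
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))                       # b'meet at noon'
print("pixels changed:", int(np.count_nonzero(cover != stego)))
```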
{
"docid": "c1eb1bded65ad62c395183318622ab76",
"text": "The CHiME challenge series aims to advance far field speech recognition technology by promoting research at the interface of signal processing and automatic speech recognition. This paper presents the design and outcomes of the 3rd CHiME Challenge, which targets the performance of automatic speech recognition in a real-world, commercially-motivated scenario: a person talking to a tablet device that has been fitted with a six-channel microphone array. The paper describes the data collection, the task definition and the baseline systems for data simulation, enhancement and recognition. The paper then presents an overview of the 26 systems that were submitted to the challenge focusing on the strategies that proved to be most successful relative to the MVDR array processing and DNN acoustic modeling reference system. Challenge findings related to the role of simulated data in system training and evaluation are discussed.",
"title": ""
},
{
"docid": "8519922a8cbb71f4c9ba8959731ce61d",
"text": "Convolutional neural networks (CNNs) have recently been applied successfully in large scale image classification competitions for photographs found on the Internet. As our brains are able to recognize objects in the images, there must be some regularities in the data that a neural network can utilize. These regularities are difficult to find an explicit set of rules for. However, by using a CNN and the backpropagation algorithm for learning, the neural network can learn to pick up on the features in the images that are characteristic for each class. Also, data regularities that are not visually obvious to us can be learned. CNNs are particularly useful for classifying data containing some spatial structure, like photographs and speech. In this paper, the technique is tested on SAR images of ships in harbour. The tests indicate that CNNs are promising methods for discriminating between targets in SAR images. However, the false alarm rate is quite high when introducing confusers in the tests. A big challenge in the development of target classification algorithms, especially in the case of SAR, is the lack of real data. This paper also describes tests using simulated SAR images of the same target classes as the real data in order to fill this data gap. The simulated images are made with the MOCEM software (developed by DGA), based on CAD models of the targets. The tests performed here indicate that simulated data can indeed be helpful in training a convolutional neural network to classify real SAR images.",
"title": ""
},
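A bare-bones PyTorch sketch of the kind of small CNN classifier one might train on SAR target chips is shown below; the chip size, channel counts, and number of classes are assumptions made for illustration, not the network used in the paper.

```python
import torch
import torch.nn as nn

class SmallSARNet(nn.Module):
    """Toy CNN for single-channel SAR chips (here 64x64 pixels, 4 target classes)."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = SmallSARNet()
chips = torch.randn(8, 1, 64, 64)      # a batch of simulated SAR chips
logits = model(chips)
print(logits.shape)                    # torch.Size([8, 4])
```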
{
"docid": "640b6328fe2a44d56fa9d7d2bf61798d",
"text": "This paper describes our participation in SemEval-2015 Task 12, and the opinion mining system sentiue. The general idea is that systems must determine the polarity of the sentiment expressed about a certain aspect of a target entity. For slot 1, entity and attribute category detection, our system applies a supervised machine learning classifier, for each label, followed by a selection based on the probability of the entity/attribute pair, on that domain. The target expression detection, for slot 2, is achieved by using a catalog of known targets for each entity type, complemented with named entity recognition. In the opinion sentiment slot, we used a 3 class polarity classifier, having BoW, lemmas, bigrams after verbs, presence of polarized terms, and punctuation based features. Working in unconstrained mode, our results for slot 1 were assessed with precision between 57% and 63%, and recall varying between 42% and 47%. In sentiment polarity, sentiue’s result accuracy was approximately 79%, reaching the best score in 2 of the 3 domains.",
"title": ""
},
{
"docid": "b9224781eb15a69a9fb5772522c0dbbe",
"text": "Paper-based medical documents are still widely used in many countries, while the contents within are difficult for patients to store and manage. In contrast, electronic medical documents not only help solve these problems, but also promote the development of telemedicine and medical big data. Thus, how to transform traditional printed medical documents into electronic ones becomes a key issue. It is worth noting that recognizing Chinese medical document in image form is a challenging task, as there are a variety of characters and symbols, including Greek alphabets, mathematical symbols and so on. The structure of Chinese characters is also often intricate. At present, the popular Optical Character Recognition methods are designed for single-scale characters, which tend to have poor performance in those complex scenarios. Based on Convolutional Recurrent Neural Network (CRNN), this paper proposes a multiscale architecture to recognize multi-lingual characters. To verify the effectiveness, the model is trained on a synthetic dataset and evaluated on a real Chinese medical document dataset. The experimental results demonstrate that the proposed method achieves substantial improvement over the recent methods.",
"title": ""
},
{
"docid": "5969b69858c7f7e7836db2f9d1276b87",
"text": "Intelligent tutoring systems (ITSs) acquire rich data about students behavior during learning; data mining techniques can help to describe, interpret and predict student behavior, and to evaluate progress in relation to learning outcomes. This paper surveys a variety of data mining techniques for analyzing how students interact with ITSs, including methods for handling hidden state variables, and for testing hypotheses. To illustrate these methods we draw on data from two ITSs for math instruction. Educational datasets provide new challenges to the data mining community, including inducing action patterns, designing distance metrics, and inferring unobservable states associated with learning.",
"title": ""
},
{
"docid": "1f2f6aab0e3c813392ecab46cdc171b5",
"text": "Theory of mind (ToM) refers to the ability to represent one's own and others' cognitive and affective mental states. Recent imaging studies have aimed to disentangle the neural networks involved in cognitive as opposed to affective ToM, based on clinical observations that the two can functionally dissociate. Due to large differences in stimulus material and task complexity findings are, however, inconclusive. Here, we investigated the neural correlates of cognitive and affective ToM in psychologically healthy male participants (n = 39) using functional brain imaging, whereby the same set of stimuli was presented for all conditions (affective, cognitive and control), but associated with different questions prompting either a cognitive or affective ToM inference. Direct contrasts of cognitive versus affective ToM showed that cognitive ToM recruited the precuneus and cuneus, as well as regions in the temporal lobes bilaterally. Affective ToM, in contrast, involved a neural network comprising prefrontal cortical structures, as well as smaller regions in the posterior cingulate cortex and the basal ganglia. Notably, these results were complemented by a multivariate pattern analysis (leave one study subject out), yielding a classifier with an accuracy rate of more than 85% in distinguishing between the two ToM-conditions. The regions contributing most to successful classification corresponded to those found in the univariate analyses. The study contributes to the differentiation of neural patterns involved in the representation of cognitive and affective mental states of others.",
"title": ""
},
{
"docid": "908804746cfc32ebe0f10e05f99d2c56",
"text": "Bigdataanalyticswiththecloudcomputingareoneoftheemergingareaforprocessing andanalytics.Fogcomputingistheparadigmwherefogdeviceshelptoreducelatency andincreasethroughputforassistingattheedgeoftheclient.Thisarticlediscusses theemergenceoffogcomputingformininganalyticsinbigdatafromgeospatialand medicalhealthapplications.Thisarticleproposesanddevelopsafogcomputing-based framework,i.e.FogLearn.ThisisfortheapplicationofK-meansclusteringinGanga RiverBasinManagementandreal-worldfeaturedatafordetectingdiabetespatients sufferingfromdiabetesmellitus.Theproposedarchitectureemploysmachinelearning onadeeplearningframeworkfortheanalysisofpathologicalfeaturedatathatobtained fromsmartwatcheswornbythepatientswithdiabetesandgeographicalparameters ofRiverGangabasingeospatialdatabase.Theresultsshowthatfogcomputingholds animmensepromisefortheanalysisofmedicalandgeospatialbigdata. KeywoRDS Cloud Computing, Clustering, Diabetes, Fog Computing, Geospatial Big Data, Geospatial Data, K-Means, Medical Big Data, River, Visualization International Journal of Fog Computing Volume 1 • Issue 1 • January-June 2018",
"title": ""
},
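Since the framework above centres on K-means clustering of diabetes-related feature data, a minimal scikit-learn sketch of that clustering step is given here; the synthetic feature matrix, the feature names, and the choice of k = 2 are placeholders, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Placeholder patient features, e.g. glucose, BMI, age, blood pressure.
features = rng.normal(size=(200, 4)) * [30.0, 5.0, 12.0, 15.0] + [120.0, 28.0, 45.0, 80.0]

# Standardise so that no single unit dominates the Euclidean distance.
X = StandardScaler().fit_transform(features)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("cluster centres (standardised units):\n", kmeans.cluster_centers_)
```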
{
"docid": "5a2be4e590d31b0cb553215f11776a15",
"text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.",
"title": ""
},
{
"docid": "bc3f64571ac833049e95994c675df26a",
"text": "Effective Poisson–Nernst–Planck (PNP) equations are derived for ion transport in charged porous media under forced convection (periodic flow in the frame of the mean velocity) by an asymptotic multiscale expansion with drift. The homogenized equations provide a modeling framework for engineering while also addressing fundamental questions about electrodiffusion in charged porous media, relating to electroneutrality, tortuosity, ambipolar diffusion, Einstein’s relation, and hydrodynamic dispersion. The microscopic setting is a two-component periodic composite consisting of a dilute electrolyte continuum (described by standard PNP equations) and a continuous dielectric matrix, which is impermeable to the ions and carries a given surface charge. As a first approximation for forced convection, the electrostatic body force on the fluid and electro-osmotic flows are neglected. Four new features arise in the upscaled equations: (i) the effective ionic diffusivities and mobilities become tensors, related to the microstructure; (ii) the effective permittivity is also a tensor, depending on the electrolyte/matrix permittivity ratio and the ratio of the Debye screening length to the macroscopic length of the porous medium; (iii) the microscopic convection leads to a diffusion-dispersion correction in the effective diffusion tensor; and (iv) the surface charge per volume appears as a continuous “background charge density,” as in classical membrane models. The coefficient tensors in the upscaled PNP equations can be calculated from periodic reference cell problems. For an insulating solid matrix, all gradients are corrected by the same tensor, and the Einstein relation holds at the macroscopic scale, which is not generally the case for a polarizable matrix, unless the permittivity and electric field are suitably defined. In the limit of thin double layers, Poisson’s equation is replaced by macroscopic electroneutrality (balancing ionic and surface charges). The general form of the macroscopic PNP equations may also hold for concentrated solution theories, based on the local-density and mean-field approximations. These results have broad applicability to ion transport in porous electrodes, separators, membranes, ion-exchange resins, soils, porous rocks, and biological tissues.",
"title": ""
},
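For readers who want the starting point of the homogenisation described above, the microscopic model is the standard Poisson-Nernst-Planck system for a dilute electrolyte. The form below is the textbook version written per ionic species i, with concentration c_i, signed valence z_i, diffusivity D_i, fluid velocity u, potential φ, and fixed background/surface charge density ρ_s; it is a hedged restatement of the standard equations, not the upscaled equations derived in the paper.

```latex
\[
\begin{aligned}
\frac{\partial c_i}{\partial t} + \nabla\cdot\left(\mathbf{u}\, c_i\right)
  &= \nabla\cdot\left( D_i \nabla c_i
     + \frac{D_i z_i e}{k_B T}\, c_i \nabla \phi \right),\\
-\nabla\cdot\left( \varepsilon \nabla \phi \right)
  &= \sum_i z_i e\, c_i + \rho_s .
\end{aligned}
\]
```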
{
"docid": "9f3df90362c3fcba3130de916282361c",
"text": "There has been substantial recent interest in annotation schemes that can be applied consistently to many languages. Building on several recent efforts to unify morphological and syntactic annotation, the Universal Dependencies (UD) project seeks to introduce a cross-linguistically applicable part-of-speech tagset, feature inventory, and set of dependency relations as well as a large number of uniformly annotated treebanks. We present Universal Dependencies for Finnish, one of the ten languages in the recent first release of UD project treebank data. We detail the mapping of previously introduced annotation to the UD standard, describing specific challenges and their resolution. We additionally present parsing experiments comparing the performance of a stateof-the-art parser trained on a languagespecific annotation schema to performance on the corresponding UD annotation. The results show improvement compared to the source annotation, indicating that the conversion is accurate and supporting the feasibility of UD as a parsing target. The introduced tools and resources are available under open licenses from http://bionlp.utu.fi/ud-finnish.html.",
"title": ""
},
{
"docid": "a3bce6c544a08e48a566a189f66d0131",
"text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.",
"title": ""
},
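The third step of the pipeline above is potential-based reward shaping, where the potential comes from a reward function recovered by linear inverse reinforcement learning. A hedged sketch of just that shaping step is shown below; the feature map `phi(s)`, the weight vector `w`, and the discount value are placeholders assumed to come from the earlier steps rather than values from the paper.

```python
import numpy as np

GAMMA = 0.99

def phi(state) -> np.ndarray:
    """Placeholder state-feature map; in practice this comes from the domain."""
    return np.asarray(state, dtype=float)

# Weights of the linear reward recovered by inverse reinforcement learning
# from the demonstrations (placeholder values).
w = np.array([0.3, -0.1, 0.8])

def potential(state) -> float:
    # Use the recovered linear reward directly as the shaping potential.
    return float(w @ phi(state))

def shaped_reward(env_reward: float, state, next_state, done: bool) -> float:
    # Potential-based shaping: F(s, s') = gamma * Phi(s') - Phi(s),
    # which is known to leave the optimal policy unchanged (Ng et al., 1999).
    next_potential = 0.0 if done else potential(next_state)
    return env_reward + GAMMA * next_potential - potential(state)

# Toy transition: a sparse environment reward of 0 gets a dense shaping signal.
print(shaped_reward(0.0, state=[0.0, 1.0, 0.0], next_state=[0.5, 0.5, 0.2], done=False))
```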
{
"docid": "643e083415859324c1fdd58e050d30b5",
"text": "In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.",
"title": ""
},
{
"docid": "833095fbc8c06c5698521420e1aa6a3b",
"text": "In the last two decades, Computer Aided Detection (CAD) systems were developed to help radiologists analyse screening mammograms, however benefits of current CAD technologies appear to be contradictory, therefore they should be improved to be ultimately considered useful. Since 2012, deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNN-s have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are published online at https://github.com/riblidezso/frcnn_cad.",
"title": ""
},
{
"docid": "6a6f7493f38248b06fe67039143bda82",
"text": "Time series forecasting techniques have been widely applied in domains such as weather forecasting, electric power demand forecasting, earthquake forecasting, and financial market forecasting. Because of the fact that these time series are affected by a multitude of interrelating macroscopic and microscopic variables, the underlying models that generate these time series are nonlinear and extremely complex. Therefore, it is computationally infeasible to develop full-scale models with the present computing technology. Therefore, researchers have resorted to smaller-scale models that require frequent recalibration. Despite advances in forecasting technology over the past few decades, there have not been algorithms that can consistently produce accurate forecasts with statistical significance. This is mainly because state-of-the-art forecasting algorithms essentially perform single-horizon forecasts and produce continuous numbers as outputs. This paper proposes a novel multi-horizon ternary forecasting algorithm that forecasts whether a time series is heading for an uptrend or downtrend, or going sideways. The proposed system utilizes a cascade of support vector machines, each of which is trained to forecast a specific horizon. Individual forecasts of these support vector machines are combined to form an extrapolated time series. A higher level forecasting system then forward-runs the extrapolated time series and then forecasts the future trend of the input time series in accordance with some volatility measure. Experiments have been carried out on some datasets. Over these datasets, this system achieves accuracy rates well above the baseline accuracy rate, implying statistical significance. The experimental results demonstrate the efficacy of our framework.",
"title": ""
},
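A stripped-down sketch of the "one support vector machine per forecast horizon" idea is given below, using scikit-learn regressors on lagged windows of a synthetic series; the window length, horizons, kernel settings, and trend threshold are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=600))      # synthetic random-walk series
WINDOW, HORIZONS = 30, range(1, 6)            # look-back length and forecast horizons

def make_dataset(series, window, horizon):
    """Build (lagged window, value at +horizon) training pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Train one SVR per horizon (the "cascade" of horizon-specific models).
models = {}
for h in HORIZONS:
    X, y = make_dataset(series, WINDOW, h)
    models[h] = SVR(kernel="rbf", C=10.0).fit(X, y)

# Extrapolate the series over all horizons from the most recent window.
last_window = series[-WINDOW:].reshape(1, -1)
extrapolated = np.array([models[h].predict(last_window)[0] for h in HORIZONS])

# Ternary trend call from the extrapolated path, with a simple volatility threshold.
threshold = np.std(np.diff(series[-100:]))
move = extrapolated[-1] - series[-1]
trend = "up" if move > threshold else "down" if move < -threshold else "sideways"
print(trend, extrapolated.round(2))
```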
{
"docid": "6be843d8364038473a850420d549702f",
"text": "The modernization of Global Positioning Systems (GPS) and the availability of more complex signals and modulation schemes boost the development of civil and military applications while the accuracy and coverage of receivers continually improve. Recently, software defined receiver solutions gained attention for flexible multimode operations. For them, developers address algorithmic and hardware accelerators or their hybrids for fast prototyping and testing high performance receivers for various conditions. This paper presents a new fast prototyping concept exploiting digital signal processor (DSP) peripherals and the benefits of the host environment using the National Instruments (NI) LabVIEW platform. With a reasonable distribution of tasks between the host hardware and reconfigurable peripherals, a higher performance is achieved. As a case study, in this paper the Texas Instruments (TI) TMS320C6713 DSP is used along with a Real Time Data Exchange (RTDX) communication link to compare with similar Simulink-based solutions. The proposed testbed GPS signal is created using the NI PXI signal generator and the NI GPS Simulation Toolkit.",
"title": ""
},
{
"docid": "a53fd98780baa0830813543d5e246a63",
"text": "This paper covers a sales forecasting problem on e-commerce sites. To predict product sales, we need to understand customers’ browsing behavior and identify whether it is for purchase purpose or not. For this goal, we propose a new customer model, B2P, of aggregating predictive features extracted from customers’ browsing history. We perform experiments on a real world e-commerce site and show that sales predictions by our model are consistently more accurate than those by existing state-of-the-art baselines.",
"title": ""
},
{
"docid": "c73fb01fcad388bfd01776f58e63ca7f",
"text": "Progress in text understanding has been driven by large datasets that test particular capabilities, like recent datasets for reading comprehension (Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al., 2016), a word prediction task requiring broader context than the immediate sentence. We view LAMBADA as a reading comprehension problem and apply comprehension models based on neural networks. Though these models are constrained to choose a word from the context, they improve the state of the art on LAMBADA from 7.3% to 49%. We analyze 100 instances, finding that neural network readers perform well in cases that involve selecting a name from the context based on dialogue or discourse cues but struggle when coreference resolution or external knowledge is needed.",
"title": ""
}
] | scidocsrr |
51e50a07d050ed6ceb878a88c95cb1bd | Narcissism on Facebook : Self-promotional and anti-social behavior | [
{
"docid": "3ad3cb86e9d7eb1e77406b259294de13",
"text": "The present research examined how narcissism is manifested on a social networking Web site (i.e., Facebook.com). Narcissistic personality self-reports were collected from social networking Web page owners. Then their Web pages were coded for both objective and subjective content features. Finally, strangers viewed the Web pages and rated their impression of the owner on agentic traits, communal traits, and narcissism. Narcissism predicted (a) higher levels of social activity in the online community and (b) more self-promoting content in several aspects of the social networking Web pages. Strangers who viewed the Web pages judged more narcissistic Web page owners to be more narcissistic. Finally, mediational analyses revealed several Web page content features that were influential in raters' narcissistic impressions of the owners, including quantity of social interaction, main photo self-promotion, and main photo attractiveness. Implications of the expression of narcissism in social networking communities are discussed.",
"title": ""
},
{
"docid": "9948738a487ed899ec50ac292e1f9c6d",
"text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.",
"title": ""
}
] | [
{
"docid": "891d4b804e5b2c78ab2f00dbe7adf1e2",
"text": "We show how the Langlands-Kottwitz method can be used to determine the semisimple local factors of the Hasse-Weil zeta-function of certain Shimura varieties. On the way, we prove a conjecture of Haines and Kottwitz in this special case.",
"title": ""
},
{
"docid": "4fb93d604733837782085ecb19b49621",
"text": "Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequent on the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% on code generation than the stateof-the-art language models.",
"title": ""
},
{
"docid": "777cbf7e5c5bdf4457ce24520bbc8036",
"text": "Recently, both industry and academia have proposed many different roadmaps for the future of DRAM. Consequently, there is a growing need for an extensible DRAM simulator, which can be easily modified to judge the merits of today's DRAM standards as well as those of tomorrow. In this paper, we present Ramulator, a fast and cycle-accurate DRAM simulator that is built from the ground up for extensibility. Unlike existing simulators, Ramulator is based on a generalized template for modeling a DRAM system, which is only later infused with the specific details of a DRAM standard. Thanks to such a decoupled and modular design, Ramulator is able to provide out-of-the-box support for a wide array of DRAM standards: DDR3/4, LPDDR3/4, GDDR5, WIO1/2, HBM, as well as some academic proposals (SALP, AL-DRAM, TL-DRAM, RowClone, and SARP). Importantly, Ramulator does not sacrifice simulation speed to gain extensibility: according to our evaluations, Ramulator is 2.5× faster than the next fastest simulator. Ramulator is released under the permissive BSD license.",
"title": ""
},
{
"docid": "45c6d576e6c8e1dbd731126c4fb36b62",
"text": "Marine debris is listed among the major perceived threats to biodiversity, and is cause for particular concern due to its abundance, durability and persistence in the marine environment. An extensive literature search reviewed the current state of knowledge on the effects of marine debris on marine organisms. 340 original publications reported encounters between organisms and marine debris and 693 species. Plastic debris accounted for 92% of encounters between debris and individuals. Numerous direct and indirect consequences were recorded, with the potential for sublethal effects of ingestion an area of considerable uncertainty and concern. Comparison to the IUCN Red List highlighted that at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Hence where marine debris combines with other anthropogenic stressors it may affect populations, trophic interactions and assemblages.",
"title": ""
},
{
"docid": "0aa0c63a4617bf829753df08c5544791",
"text": "The paper discusses the application program interface (API). Most software projects reuse components exposed through APIs. In fact, current-day software development technologies are becoming inseparable from the large APIs they provide. An API is the interface to implemented functionality that developers can access to perform various tasks. APIs support code reuse, provide high-level abstractions that facilitate programming tasks, and help unify the programming experience. A study of obstacles that professional Microsoft developers faced when learning to use APIs uncovered challenges and resulting implications for API users and designers. The article focuses on the obstacles to learning an API. Although learnability is only one dimension of usability, there's a clear relationship between the two, in that difficult-to-use APIs are likely to be difficult to learn as well. Many API usability studies focus on situations where developers are learning to use an API. The author concludes that as APIs keep growing larger, developers will need to learn a proportionally smaller fraction of the whole. In such situations, the way to foster more efficient API learning experiences is to include more sophisticated means for developers to identify the information and the resources they need-even for well-designed and documented APIs.",
"title": ""
},
{
"docid": "3b584918e05d5e7c0c34f3ad846285d3",
"text": "Recently, there is increasing interest and research on the interpretability of machine learning models, for example how they transform and internally represent EEG signals in Brain-Computer Interface (BCI) applications. This can help to understand the limits of the model and how it may be improved, in addition to possibly provide insight about the data itself. Schirrmeister et al. (2017) have recently reported promising results for EEG decoding with deep convolutional neural networks (ConvNets) trained in an end-to-end manner and, with a causal visualization approach, showed that they learn to use spectral amplitude changes in the input. In this study, we investigate how ConvNets represent spectral features through the sequence of intermediate stages of the network. We show higher sensitivity to EEG phase features at earlier stages and higher sensitivity to EEG amplitude features at later stages. Intriguingly, we observed a specialization of individual stages of the network to the classical EEG frequency bands alpha, beta, and high gamma. Furthermore, we find first evidence that particularly in the last convolutional layer, the network learns to detect more complex oscillatory patterns beyond spectral phase and amplitude, reminiscent of the representation of complex visual features in later layers of ConvNets in computer vision tasks. Our findings thus provide insights into how ConvNets hierarchically represent spectral EEG features in their intermediate layers and suggest that ConvNets can exploit and might help to better understand the compositional structure of EEG time series.",
"title": ""
},
{
"docid": "a0a73cc2b884828eb97ff8045bfe50a6",
"text": "A variety of antennas have been engineered with metamaterials (MTMs) and metamaterial-inspired constructs to improve their performance characteristics. Examples include electrically small, near-field resonant parasitic (NFRP) antennas that require no matching network and have high radiation efficiencies. Experimental verification of their predicted behaviors has been obtained. Recent developments with this NFRP electrically small paradigm will be reviewed. They include considerations of increased bandwidths, as well as multiband and multifunctional extensions.",
"title": ""
},
{
"docid": "11c6c2a539b08fb13f1e7ffad7726e50",
"text": "Virtual and augmented reality are becoming the new medium that transcend the way we interact with virtual content, paving the way for many immersive and interactive forms of applications. The main purpose of my research is to create a seamless combination of physiological sensing with virtual reality to provide users with a new layer of input modality or as a form of implicit feedback. To achieve this, my research focuses in novel augmented reality (AR) and virtual reality (VR) based application for a multi-user, multi-view, multi-modal system augmented by physiological sensing methods towards an increased public and social acceptance.",
"title": ""
},
{
"docid": "44cf5669d05a759ab21b3ebc1f6c340d",
"text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection",
"title": ""
},
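To illustrate the kind of digital processing such a one-chip conditioner performs, here is a numpy sketch of synchronous (lock-in) demodulation of a simulated LVDT half-bridge signal; the sampling rate, excitation frequency, noise level, and displacement value are invented for the example and are not taken from the paper.

```python
import numpy as np

FS = 100_000          # ADC sampling rate, Hz (illustrative)
F_EXC = 5_000         # excitation frequency, Hz
DURATION = 0.01       # seconds of signal to process (an integer number of periods)

t = np.arange(0.0, DURATION, 1.0 / FS)
reference = np.sin(2 * np.pi * F_EXC * t)

# Simulated secondary signal: amplitude proportional to core displacement,
# plus a little measurement noise.
displacement = 0.37                                   # fraction of full scale
signal = displacement * reference + 0.01 * np.random.default_rng(3).normal(size=t.size)

# Synchronous demodulation: multiply by the reference and low-pass by averaging.
mixed = signal * reference
recovered = 2.0 * np.mean(mixed)       # factor 2 compensates the 1/2 from sin^2
print(f"recovered displacement ~ {recovered:.3f} (true value {displacement})")
```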
{
"docid": "8a634e7bf127f2a90227c7502df58af0",
"text": "A convex channel surface with Si0.8Ge0.2 is proposed to enhance the retention time of a capacitorless DRAM Generation 2 type of capacitorless DRAM cell. This structure provides a physical well together with an electrostatic barrier to more effectively store holes and thereby achieve larger sensing margin as well as retention time. The advantages of this new cell design as compared with the planar cell design are assessed via twodimensional device simulations. The results indicate that the convex heterojunction channel design is very promising for future capacitorless DRAM. Keywords-Capacitorless DRAM; Retention Time; Convex Channel; Silicon Germanium;",
"title": ""
},
{
"docid": "87f2050c8e3f49d5ff5bdb329bfafcf9",
"text": "To test whether mental activities collected from non-REM sleep are influenced by REM sleep, we suppressed REM sleep using clomipramine 50mg (an antidepressant) or placebo in the evening, in a double blind cross-over design, in 11 healthy young men. Subjects were awakened every hour and asked about their mental activity. The marked (81%, range 39-98%) REM-sleep suppression induced by clomipramine did not substantially affect any aspects of dream recall (report length, complexity, bizarreness, pleasantness and self-perception of dream or thought-like mentation). Since long, complex and bizarre dreams persist even after suppressing REM sleep either partially or totally, it suggests that the generation of mental activity during sleep is independent of sleep stage.",
"title": ""
},
{
"docid": "6fd79de4d6c78245a7a50fa6608d12ab",
"text": "Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called “supervised discrete hashing with relaxation” (SDHR) based on “supervised discrete hashing” (SDH). SDH uses ordinary least squares regression and traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and, therefore, more accurately measures the classification error of the regression model and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR.",
"title": ""
},
{
"docid": "b8fdea273f4b22f564e2d961154d4d8d",
"text": "While the study of the physiochemical composition and structure of the interstitium on a molecular level is a large and important field in itself, the present review centered mainly on the functional consequences for the control of extracellular fluid volume. As pointed out in section I, a biological monitoring system for the total extracellular volume seems very unlikely because a major part of that volume is made up of multiple, separate, and functionally heterogeneous interstitial compartments. Even less likely is a selective volume control of each of these compartments by the nervous system. Instead, as shown by many studies cited in this review, a local autoregulation of interstitial volume is provided by automatic adjustment of the transcapillary Starling forces and lymph flow. Local vascular control of capillary pressure and surface area, of special importance in orthostasis, has been discussed in several recent reviews and was mentioned only briefly in this article. The gel-like consistency of the interstitium is attributed to glycosaminoglycans, in soft connective tissues mainly hyaluronan. However, the concept of a gel phase and a free fluid phase now seems to be replaced by the quantitatively more well-defined distribution spaces for glycosaminoglycans and plasma protein, apparently in osmotic equilibrium with each other. The protein-excluded space, determined mainly by the content of glycosaminoglycans and collagen, has been measured in vivo in many tissues, and the effect of exclusion on the oncotic buffering has been clarified. The effect of protein charge on its excluded volume and on interstitial hydraulic conductivity has been studied only in lungs and is only partly clarified. Of unknown functional importance is also the recent finding of a free interstitial hyaluronan pool with relatively rapid removal by lymph. The postulated preferential channels from capillaries to lymphatics have received little direct support. Thus the variation of plasma-to-lymph passage times for proteins may probably be ascribed to heterogeneity with respect to path length, linear velocity, and distribution volumes. Techniques for measuring interstitial fluid pressure have been refined and reevaluated, approaching some concensus on slightly negative control pressures in soft connective tissues (0 to -4 mmHg), zero, or slightly positive pressure in other tissues. Interstitial pressure-volume curves have been recorded in several tissues, and progress has been made in clarifying the dependency of interstitial compliance on glycosaminoglycan-osmotic pressure, collagen, and microfibrils.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "7e57c7abcd4bcb79d5f0fe8b6cd9a836",
"text": "Among the many viruses that are known to infect the human liver, hepatitis B virus (HBV) and hepatitis C virus (HCV) are unique because of their prodigious capacity to cause persistent infection, cirrhosis, and liver cancer. HBV and HCV are noncytopathic viruses and, thus, immunologically mediated events play an important role in the pathogenesis and outcome of these infections. The adaptive immune response mediates virtually all of the liver disease associated with viral hepatitis. However, it is becoming increasingly clear that antigen-nonspecific inflammatory cells exacerbate cytotoxic T lymphocyte (CTL)-induced immunopathology and that platelets enhance the accumulation of CTLs in the liver. Chronic hepatitis is characterized by an inefficient T cell response unable to completely clear HBV or HCV from the liver, which consequently sustains continuous cycles of low-level cell destruction. Over long periods of time, recurrent immune-mediated liver damage contributes to the development of cirrhosis and hepatocellular carcinoma.",
"title": ""
},
{
"docid": "747e46fc4621604d6f551d909cbdf42b",
"text": "Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. This demonstration shows a computational system that creates flavorful, novel, and perhaps healthy culinary recipes by drawing on big data techniques. It brings analytics algorithms together with disparate data sources from culinary science, chemistry, and hedonic psychophysics.\n In its most powerful manifestation, the system operates through a mixed-initiative approach to human-computer interaction via turns between human and computer. In particular, the sequential creation process is modeled after stages in human cognitive processes of creativity.\n The end result is an ingredient list, ingredient proportions, as well as a directed acyclic graph representing a partial ordering of culinary recipe steps.",
"title": ""
},
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "b5b5d6c5768e40a343b672a33f9c3f0c",
"text": "In this paper we describe Icarus, a cognitive architecture for physical agents that integrates ideas from a number of traditions, but that has been especially influenced by results from cognitive psychology. We review Icarus’ commitments to memories and representations, then present its basic processes for performance and learning. We illustrate the architecture’s behavior on a task from in-city driving that requires interaction among its various components. In addition, we discuss Icarus’ consistency with qualitative findings about the nature of human cognition. In closing, we consider the framework’s relation to other cognitive architectures that have been proposed in the literature. Introduction and Motivation A cognitive architecture (Newell, 1990) specifies the infrastructure for an intelligent system that remains constant across different domains and knowledge bases. This infrastructure includes a commitment to formalisms for representing knowledge, memories for storing this domain content, and processes that utilize and acquire the knowledge. Research on cognitive architectures has been closely tied to cognitive modeling, in that they often attempt to explain a wide range of human behavior and, at the very least, desire to support the same broad capabilities as human intelligence. In this paper we describe Icarus, a cognitive architecture that builds on previous work in this area but also has some novel features. Our aim is not to match quantitative data, but rather to reproduce qualitative characteristics of human behavior, and our discussion will focus on such issues. The best method for evaluating a cognitive architecture remains an open question, but it is clear that this should happen at the systems level rather than in terms of isolated phenomena. We will not claim that Icarus accounts for any one result better than other candidates, but we will argue that it models facets of the human cognitive architecture, and the ways they fit together, that have been downplayed by other researchers in this area. Copyright c © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. A conventional paper on cognitive architectures would first describe the memories and their contents, then discuss the mechanisms that operate over them. However, Icarus’ processes interact with certain memories but not others, suggesting that we organize the text around these processes and the memories on which they depend. Moreover, some modules build on other components, which suggests a natural progression. Therefore, we first discuss Icarus’ most basic mechanism, conceptual inference, along with the memories it inspects and alters. After this, we present the processes for goal selection and skill execution, which operate over the results of inference. Finally, we consider the architecture’s module for problem solving, which builds on both inference and execution, and its associated learning processes, which operate over the results of problem solving. In each case, we discuss the framework’s connection to qualitative results from cognitive psychology. In addition, we illustrate the ideas with examples from the domain of in-city driving, which has played a central role in our research. Briefly, this involves controlling a vehicle in a simulated urban environment with buildings, road segments, street intersections, and other vehicles. 
This domain, which Langley and Choi (2006) describe at more length, provides a rich setting to study the interplay among different facets of cognition. Beliefs, Concepts, and Inference In order to carry out actions that achieve its goals, an agent must understand its current situation. Icarus includes a module for conceptual inference that is responsible for this cognitive task which operates by matching conceptual structures against percepts and beliefs. This process depends on the contents and representation of elements in short-term and long-term memory. Because Icarus is designed to support intelligent agents that operate in some external environment, it requires information about the state of its surroundings. To this end, it incorporates a perceptual buffer that describes aspects of the environment the agent perceives directly on a given cycle, after which it is updated. Each element or percept in this ephemeral memory corresponds to a particular object and specifies the object’s type, a unique name, and a set of attribute-value pairs that characterize the object on the current time step. Although one could create a stimulus-response agent that operates directly off perceptual information, its behavior would not reflect what we normally mean by the term ‘intelligent’, which requires higher-level cognition. Thus, Icarus also includes a belief memory that contains higher-level inferences about the agent’s situation. Whereas percepts describe attributes of specific objects, beliefs describe relations among objects, such as the relative positions of two buildings. Each element in this belief memory consists of a predicate and a set of symbolic arguments, each of which refers to some object, typically one that appears in the perceptual buffer. Icarus beliefs are instances of generalized concepts that reside in conceptual memory , which contains longterm structures that describe classes of environmental situations. The formalism that expresses these logical concepts is similar to that for Prolog clauses. Like beliefs, Icarus concepts are inherently symbolic and relational structures. Each clause in conceptual memory includes a head that gives the concept’s name and arguments, along with a body that states the conditions under which the clause should match against the contents of short-term memories. The architecture’s most basic activity is conceptual inference. On each cycle, the environmental simulator returns a set of perceived objects, including their types, names, and descriptions in the format described earlier. Icarus deposits this set of elements in the perceptual buffer, where they initiate matching against long-term conceptual definitions. The overall effect is that the system adds to its belief memory all elements that are implied deductively by these percepts and concept definitions. Icarus repeats this process on every cycle, so it constantly updates its beliefs about the environment. The inference module operates in a bottom-up, datadriven manner that starts from descriptions of perceived objects. The architecture matches these percepts against the bodies of primitive concept clauses and adds any supported beliefs (i.e., concept instances) to belief memory. These trigger matching against higher-level concept clauses, which in turn produces additional beliefs. The process continues until Icarus has added to memory all beliefs it can infer in this manner. 
Although this mechanism reasons over structures similar to Prolog clauses, its operation is closer to the elaboration process in the Soar architecture (Laird et al., 1987). For example, for the in-city driving domain, we provided Icarus with 41 conceptual clauses. On each cycle, the simulator deposits a variety of elements in the perceptual buffer, including percepts for the agent itself (self ), street segments (e.g., segment2), lane lines (e.g., line1), buildings, and other entities. Based on attributes of the object self and one of the segments, the architecture derives the primitive concept instance (in-segment self segment2). Similarly, from self and the object line1, it infers the belief (in-lane self line1). These two elements lead Icarus to deduce two nonprimitive beliefs, (centered-in-lane self segment2 line1) and (aligned-with-lane-in-segment self segment2 line1). Finally, from these two instances and another belief, (steering-wheel-straight self), the system draws an even higher-level inference, (driving-well-in-segment self segment2 line1). Other beliefs that encode relations among perceived entities also follow from the inference process. Icarus’ conceptual inference module incorporates a number of key ideas from the psychological literature: • Concepts are distinct cognitive entities that humans use to describe their environment and goals; moreover, they support both categorization and inference; • The great majority of human categories are grounded in perception, making reference to physical characteristics of objects they describe (Barsalou, 1999); • Many human concepts are relational in nature, in that they describe connections or interactions among objects or events (Kotovsky & Gentner, 1996); • Concepts are organized in a hierarchical manner, with complex categories being defined in terms of simpler structures. Icarus reflects each of these claims at the architectural level, which contrasts with most other architectures’ treatment of concepts and categorization. However, we will not claim our treatment is complete. Icarus currently models concepts as Boolean structures that match in an all-or-none manner, whereas human categories have a graded character (Rosch & Mervis, 1975). Also, retrieval occurs in a purely bottomup fashion, whereas human categorization and inference exhibits top-down priming effects. Both constitute important directions for extending the framework. Goals, Skills, and Execution We have seen that Icarus can utilize its conceptual knowledge to infer and update beliefs about its surroundings, but an intelligent agent must also take action in the environment. To this end, the architecture includes additional memories that concern goals the agent wants to achieve, skills the agent can execute to reach them, and intentions about which skills to pursue. These are linked by a performance mechanism that executes stored skills, thus changing the environment and, hopefully, taking the agent closer to its goals. In particular, Icarus incorporates a goal memory that contains the agent’s top-level objectives. A goal is some concept instance that the agent wants to satisfy. T",
"title": ""
},
{
"docid": "ed9beb7f6ffc65439f34294dec11a966",
"text": "CONTEXT\nA variety of ankle self-stretching exercises have been recommended to improve ankle-dorsiflexion range of motion (DFROM) in individuals with limited ankle dorsiflexion. A strap can be applied to stabilize the talus and facilitate anterior glide of the distal tibia at the talocrural joint during ankle self-stretching exercises. Novel ankle self-stretching using a strap (SSS) may be a useful method of improving ankle DFROM.\n\n\nOBJECTIVE\nTo compare the effects of 2 ankle-stretching techniques (static stretching versus SSS) on ankle DFROM.\n\n\nDESIGN\nRandomized controlled clinical trial.\n\n\nSETTING\nUniversity research laboratory.\n\n\nPATIENTS OR OTHER PARTICIPANTS\nThirty-two participants with limited active dorsiflexion (<20°) while sitting (14 women and 18 men) were recruited.\n\n\nMAIN OUTCOME MEASURE(S)\nThe participants performed 2 ankle self-stretching techniques (static stretching and SSS) for 3 weeks. Active DFROM (ADFROM), passive DFROM (PDFROM), and the lunge angle were measured. An independent t test was used to compare the improvements in these values before and after the 2 stretching interventions. The level of statistical significance was set at α = .05.\n\n\nRESULTS\nActive DFROM and PDFROM were greater in both stretching groups after the 3-week interventions. However, ADFROM, PDFROM, and the lunge angle were greater in the SSS group than in the static-stretching group (P < .05).\n\n\nCONCLUSIONS\nAnkle SSS is recommended to improve ADFROM, PDFROM, and the lunge angle in individuals with limited DFROM.",
"title": ""
},
{
"docid": "d80fc668073878c476bdf3997b108978",
"text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system",
"title": ""
}
] | scidocsrr |
700f4f089e4bd53e8c2bcf3e9f6b8e3a | Digital Social Norm Enforcement: Online Firestorms in Social Media | [
{
"docid": "01b9bf49c88ae37de79b91edeae20437",
"text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.",
"title": ""
},
{
"docid": "6d52a9877ddf18eb7e43c83000ed4da1",
"text": "Cyberbullying has recently emerged as a new form of bullying and harassment. 360 adolescents (12-20 years), were surveyed to examine the nature and extent of cyberbullying in Swedish schools. Four categories of cyberbullying (by text message, email, phone call and picture/video clip) were examined in relation to age and gender, perceived impact, telling others, and perception of adults becoming aware of such bullying. There was a significant incidence of cyberbullying in lower secondary schools, less in sixth-form colleges. Gender differences were few. The impact of cyberbullying was perceived as highly negative for picture/video clip bullying. Cybervictims most often chose to either tell their friends or no one at all about the cyberbullying, so adults may not be aware of cyberbullying, and (apart from picture/video clip bullying) this is how it was perceived by pupils. Findings are discussed in relation to similarities and differences between cyberbullying and the more traditional forms of bullying.",
"title": ""
}
] | [
{
"docid": "6059cfa690c2de0a8c883aa741000f3a",
"text": "We study how a viewer can control a television set remotely by hand gestures. We address two fundamental issues of gesture{based human{computer interaction: (1) How can one communicate a rich set of commands without extensive user training and memorization of gestures? (2) How can the computer recognize the commands in a complicated visual environment? Our solution to these problems exploits the visual feedback of the television display. The user uses only one gesture: the open hand, facing the camera. He controls the television by moving his hand. On the display, a hand icon appears which follows the user's hand. The user can then move his own hand to adjust various graphical controls with the hand icon. The open hand presents a characteristic image which the computer can detect and track. We perform a normalized correlation of a template hand to the image to analyze the user's hand. A local orientation representation is used to achieve some robustness to lighting variations. We made a prototype of this system using a computer workstation and a television. The graphical overlays appear on the computer screen, although they could be mixed with the video to appear on the television. The computer controls the television set through serial port commands to an electronically controlled remote control. We describe knowledge we gained from building the prototype.",
"title": ""
},
{
"docid": "a83b6602e0d4a45e3bad60967890c46a",
"text": "In the present work, we tackle the issue of designing, prototyping and testing a general-purpose automated level editor for platform video games. Beside relieving level designers from the burden of repetitive work, Procedural Content Generation can be exploited for optimizing the development process, increasing re-playability, adapting games to specific audiences, and enabling new games mechanics. The tool proposed in this paper is aimed at producing levels that are both playable and fun. At the same time, it should guarantee maximum freedom to the level designer, and suggest corrections functional to the quality of the player experience.",
"title": ""
},
{
"docid": "ba3522be00805402629b4fb4a2c21cc4",
"text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW",
"title": ""
},
{
"docid": "490785e55545eda74f3275a0a8b5da73",
"text": "This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent correct recognition rate (CRR) and perfect receiver-operating characteristic (ROC) curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the false acceptance rate (FAR) and false rejection rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical equal error rate (EER) is predicted to be as low as 2.59 times 10-1 available data sets",
"title": ""
},
{
"docid": "075b05396818b13eff77fdcf46053fa7",
"text": "Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "321049dbe0d9bae5545de3d8d7048e01",
"text": "ShopTalk, a proof-of-concept system designed to assist individuals with visual impairments with finding shelved products in grocery stores, is built on the assumption that simple verbal route directions and layout descriptions can be used to leverage the O&M skills of independent visually impaired travelers to enable them to navigate the store and retrieve shelved products. This paper introduces ShopTalk and summarizes experiments performed in a real-world supermarket.",
"title": ""
},
{
"docid": "df610551aec503acd1a31fb519fdeabe",
"text": "A small form factor, 79 GHz, MIMO radar sensor with 2D angle of arrival estimation capabilities was designed for automotive applications. It offers a 0.05 m distance resolution required to make small minimum distance measurements. The radar dimensions are 42×44×20 mm3 enabling installation in novel side locations. This aspect, combined with a wide field of view, creates a coverage that compliments the near range coverage gaps of existing long and medium range radars. Therefore, this radar supports novel radar applications such as parking aid and can be used to create a 360 degrees safety cocoon around the car.",
"title": ""
},
{
"docid": "f331cb6d4b970829100bfe103a8d8762",
"text": "This paper presents lessons learned from an experiment to reverse engineer a program. A reverse engineering process was used as part of a project to develop an Ada implementation of a Fortran program and upgrade the existing documentation. To accomplish this, design information was extracted from the Fortran source code and entered into a software development environment. The extracted design information was used to implement a new version of the program written in Ada. This experiment revealed issues about recovering design information, such as, separating design details from implementation details, dealing with incomplete or erroneous information, traceability of information between implementation and recovered design, and re-engineering. The reverse engineering process used to recover the design, and the experience gained during the study are reported.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "61eb4d0961242bd1d1e59d889a84f89d",
"text": "Understanding and forecasting the health of an online community is of great value to its owners and managers who have vested interests in its longevity and success. Nevertheless, the association between community evolution and the behavioural patterns and trends of its members is not clearly understood, which hinders our ability of making accurate predictions of whether a community is flourishing or diminishing. In this paper we use statistical analysis, combined with a semantic model and rules for representing and computing behaviour in online communities. We apply this model on a number of forum communities from Boards.ie to categorise behaviour of community members over time, and report on how different behaviour compositions correlate with positive and negative community growth in these forums.",
"title": ""
},
{
"docid": "1162833be969a71b3d9b837d7e6f4464",
"text": "RaineR WaseR1,2* and Masakazu aono3,4 1Institut für Werkstoffe der Elektrotechnik 2, RWTH Aachen University, 52056 Aachen, Germany 2Institut für Festkörperforschung/CNI—Center of Nanoelectronics for Information Technology, Forschungszentrum Jülich, 52425 Jülich, Germany 3Nanomaterials Laboratories, National Institute for Material Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan 4ICORP/Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan *e-mail: r.waser@fz-juelich.de",
"title": ""
},
{
"docid": "1dcfd9b82cddb3111df067497febdd8b",
"text": "Studies investigating the prevalence of psychiatric disorders among trans individuals have identified elevated rates of psychopathology. Research has also provided conflicting psychiatric outcomes following gender-confirming medical interventions. This review identifies 38 cross-sectional and longitudinal studies describing prevalence rates of psychiatric disorders and psychiatric outcomes, pre- and post-gender-confirming medical interventions, for people with gender dysphoria. It indicates that, although the levels of psychopathology and psychiatric disorders in trans people attending services at the time of assessment are higher than in the cis population, they do improve following gender-confirming medical intervention, in many cases reaching normative values. The main Axis I psychiatric disorders were found to be depression and anxiety disorder. Other major psychiatric disorders, such as schizophrenia and bipolar disorder, were rare and were no more prevalent than in the general population. There was conflicting evidence regarding gender differences: some studies found higher psychopathology in trans women, while others found no differences between gender groups. Although many studies were methodologically weak, and included people at different stages of transition within the same cohort of patients, overall this review indicates that trans people attending transgender health-care services appear to have a higher risk of psychiatric morbidity (that improves following treatment), and thus confirms the vulnerability of this population.",
"title": ""
},
{
"docid": "b54045769ce80654400706a2489a2968",
"text": "This study aims to develop a methodology for predicting cycle time based on domain knowledge and data mining algorithms given production status including WIP, throughput. The proposed model and derived rules were validated with real data and demonstrated its practical viability for supporting production planning decisions",
"title": ""
},
{
"docid": "70745e8cdf957b1388ab38a485e98e60",
"text": "Network studies of large-scale brain connectivity have begun to reveal attributes that promote the segregation and integration of neural information: communities and hubs. Network communities are sets of regions that are strongly interconnected among each other while connections between members of different communities are less dense. The clustered connectivity of network communities supports functional segregation and specialization. Network hubs link communities to one another and ensure efficient communication and information integration. This review surveys a number of recent reports on network communities and hubs, and their role in integrative processes. An emerging focus is the shifting balance between segregation and integration over time, which manifest in continuously changing patterns of functional interactions between regions, circuits and systems.",
"title": ""
},
{
"docid": "b6d8e6b610eff993dfa93f606623e31d",
"text": "Data journalism designates journalistic work inspired by digital data sources. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community and referred the process of verifying and ensuring the accuracy of published media content; since 2012, however, it has increasingly focused on the analysis of politics, economy, science, and news content shared in any form, but first and foremost on the Web (social and otherwise). These trends have been noticed by computer scientists working in the industry and academia. Thus, a very lively area of digital content management research has taken up these problems and works to propose foundations (models), algorithms, and implement them through concrete tools. Our tutorial: (i) Outlines the current state of affairs in the area of digital (or computational) fact-checking in newsrooms, by journalists, NGO workers, scientists and IT companies; (ii) Shows which areas of digital content management research, in particular those relying on the Web, can be leveraged to help fact-checking, and gives a comprehensive survey of efforts in this area; (iii) Highlights ongoing trends, unsolved problems, and areas where we envision future scientific and practical advances. PVLDB Reference Format: S. Cazalens, J. Leblay, P. Lamarre, I. Manolescu, X. Tannier. Computational Fact Checking: A Content Management Perspective. PVLDB, 11 (12): 2110-2113, 2018. DOI: https://doi.org/10.14778/3229863.3229880 This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing info@vldb.org. Proceedings of the VLDB Endowment, Vol. 11, No. 12 Copyright 2018 VLDB Endowment 2150-8097/18/8. DOI: https://doi.org/10.14778/3229863.3229880 1. OUTLINE In Section 1.1, we provide a short history of journalistic fact-checking and presents its most recent and visible actors, from the media and/or NGO communities. Section 1.2 discusses the scientific content management areas which bring useful tools for computational fact-checking. 1.1 Data journalism and fact-checking While data of some form is a natural ingredient of all reporting, the increasing volumes and complexity of digital data lead to a qualitative jump, where technical skills, and in particular data science skills, are stringently needed in journalistic work. A particularly popular and active area of data journalism is concerned with fact-checking. The term was born in the journalist community; it referred to the task of identifying and checking factual claims present in media content, which dedicated newsroom personnel would then check for factual accuracy. The goal of such checking was to avoid misinformation, to protect the journal reputation and avoid legal actions. Starting around 2012, first in the United States (FactCheck.org), then in Europe, and soon after in all areas of the world, journalists have started to take advantage of modern technologies for processing content, such as text, video, structured and unstructured data, in order to automate, at least partially, the knowledge finding, reasoning, and analysis tasks which had been previously performed completely by humans. Over time, the focus of fact-checking shifted from verifying claims made by media outlets, toward the claims made by politicians and other public figures. 
This trend coincided with the parallel (but distinct) evolution toward asking Government Open Data, that is: the idea that governing bodies should share with the public precise information describing their functioning, so that the people have means to assess the quality of their elected representation. Government Open Data became quickly available, in large volumes, e.g. through data.gov in the US, data.gov.uk in the UK, data.gouv.fr in France etc.; journalists turned out to be the missing link between the newly available data and comprehension by the public. Data journalism thus found http://factcheck.org",
"title": ""
},
{
"docid": "0ae5df7af64f0069d691922d391f3c60",
"text": "With the realization that more research is needed to explore external factors (e.g., pedagogy, parental involvement in the context of K-12 learning) and internal factors (e.g., prior knowledge, motivation) underlying student-centered mobile learning, the present study conceptually and empirically explores how the theories and methodologies of self-regulated learning (SRL) can help us analyze and understand the processes of mobile learning. The empirical data collected from two elementary science classes in Singapore indicates that the analytical SRL model of mobile learning proposed in this study can illuminate the relationships between three aspects of mobile learning: students’ self-reports of psychological processes, patterns of online learning behavior in the mobile learning environment (MLE), and learning achievement. Statistical analyses produce three main findings. First, student motivation in this case can account for whether and to what degree the students can actively engage in mobile learning activities metacognitively, motivationally, and behaviorally. Second, the effect of students’ self-reported motivation on their learning achievement is mediated by their behavioral engagement in a pre-designed activity in the MLE. Third, students’ perception of parental autonomy support is not only associated with their motivation in school learning, but also associated with their actual behaviors in self-regulating their learning. ! 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a478b6f7accfb227e6ee5a6b35cd7fa1",
"text": "This paper presents the development of an ultra-high-speed permanent magnet synchronous motor (PMSM) that produces output shaft power of 2000 W at 200 000 rpm with around 90% efficiency. Due to the guaranteed open-loop stability over the full operating speed range, the developed motor system is compact and low cost since it can avoid the design complexity of a closed-loop controller. This paper introduces the collaborative design approach of the motor system in order to ensure both performance requirements and stability over the full operating speed range. The actual implementation of the motor system is then discussed. Finally, computer simulation and experimental results are provided to validate the proposed design and its effectiveness",
"title": ""
},
{
"docid": "8375f143ff6b42e36e615a78a362304b",
"text": "The Ball and Beam system is a popular technique for the study of control systems. The system has highly non-linear characteristics and is an excellent tool to represent an unstable system. The control of such a system presents a challenging task. The ball and beam mirrors the real time unstable complex systems such as flight control, on a small laboratory level and provides for developing control algorithms which can be implemented at a higher scale. The objective of this paper is to design and implement cascade PD control of the ball and beam system in LabVIEW using data acquisition board and DAQmx and use the designed control circuit to verify results in real time.",
"title": ""
},
{
"docid": "bbbbe3f926de28d04328f1de9bf39d1a",
"text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RSTþSVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.",
"title": ""
}
] | scidocsrr |
41e986533505e706430352f2ed053401 | Multiple Kernel Learning for Hyperspectral Image Classification: A Review | [
{
"docid": "cc8adbaf01e3ab61546fd875724ac270",
"text": "This paper presents the image information mining based on a communication channel concept. The feature extraction algorithms encode the image, while an analysis of topic discovery will decode and send its content to the user in the shape of a semantic map. We consider this approach for a real meaning based semantic annotation of very high resolution remote sensing images. The scene content is described using a multi-level hierarchical information representation. Feature hierarchies are discovered considering that higher levels are formed by combining features from lower level. Such a level to level mapping defines our methodology as a deep learning process. The whole analysis can be divided in two major learning steps. The first one regards the Bayesian inference to extract objects and assign basic semantic to the image. The second step models the spatial interactions between the scene objects based on Latent Dirichlet Allocation, performing a high level semantic annotation. We used a WorldView2 image to exemplify the processing results.",
"title": ""
},
{
"docid": "2af5e18cfb6dadd4d5145a1fa63f0536",
"text": "Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulties are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated to the measurement process such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In all topics, we describe the state-of-the-art, provide illustrative examples, and point to future challenges and research directions.",
"title": ""
},
{
"docid": "63af822cd877b95be976f990b048f90c",
"text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well",
"title": ""
}
] | [
{
"docid": "19364f2394650f8c3d899a5ceb2fc493",
"text": "In this paper, we study cost-sensitive semi-supervised learning where many of the training examples are unlabeled and different misclassification errors are associated with unequal costs. This scenario occurs in many real-world applications. For example, in some disease diagnosis, the cost of erroneously diagnosing a patient as healthy is much higher than that of diagnosing a healthy person as a patient. Also, the acquisition of labeled data requires medical diagnosis which is expensive, while the collection of unlabeled data such as basic health information is much cheaper. We propose the CS4VM (Cost-Sensitive Semi-Supervised Support Vector Machine) to address this problem. We show that the CS4VM, when given the label means of the unlabeled data, closely approximates the supervised cost-sensitive SVM that has access to the ground-truth labels of all the unlabeled data. This observation leads to an efficient algorithm which first estimates the label means and then trains the CS4VM with the plug-in label means by an efficient SVM solver. Experiments on a broad range of data sets show that the proposed method is capable of reducing the total cost and is computationally efficient.",
"title": ""
},
{
"docid": "5a74a585fb58ff09c05d807094523fb9",
"text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.",
"title": ""
},
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "873e9eb826c0ae454db3032fc63f7073",
"text": "The purpose of this study is to explore Thai online customers' repurchase intention towards clothing. This study integrated Delone and Mclean's e-commerce success model to predict customers' repurchase intention to purchase clothing on the Internet. The data was collected using convenience sampling method with a survey of the customers in Thailand who had experienced purchasing clothing online. The findings indicate that repurchase intention is mostly influenced by both online shopping satisfaction and online shopping trust. The relationships between Internet shopping value and online shopping satisfaction and online shopping trust are found to be significant as well. Components of website quality have differing effect on utilitarian and hedonic value. System quality and service quickness influences utilitarian value as well as the hedonic value. System accessibility and information timely positively influence utilitarian value while information variety and service receptiveness have a positive effect hedonic value.",
"title": ""
},
{
"docid": "85cd0262fec2586740fe4199cf56c766",
"text": "New information on infectious diseases in older adults has become available in the past 20 years. In this review, in-depth discussions on the general problem of geriatric infectious diseases (epidemiology, pathogenesis, age-related host defenses, clinical manifestations, diagnostic approach); diagnosis and management of bacterial pneumonia, urinary tract infection, and Clostridium difficile infection; and the unique challenges of diagnosing and managing infections in a long-term care setting are presented.",
"title": ""
},
{
"docid": "061ac4487fba7837f44293a2d20b8dd9",
"text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.",
"title": ""
},
{
"docid": "cdd27bbcbab81a243dda6bb855fb8f72",
"text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "644d262f1d2f64805392c15506764558",
"text": "In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision eld about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed signi cant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.",
"title": ""
},
{
"docid": "9fb5db3cdcffb968b54c7d23d8a690a2",
"text": "BACKGROUND\nPhysical activity is associated with many physical and mental health benefits, however many children do not meet the national physical activity guidelines. While schools provide an ideal setting to promote children's physical activity, adding physical activity to the school day can be difficult given time constraints often imposed by competing key learning areas. Classroom-based physical activity may provide an opportunity to increase school-based physical activity while concurrently improving academic-related outcomes. The primary aim of this systematic review and meta-analysis was to evaluate the impact of classroom-based physical activity interventions on academic-related outcomes. A secondary aim was to evaluate the impact of these lessons on physical activity levels over the study duration.\n\n\nMETHODS\nA systematic search of electronic databases (PubMed, ERIC, SPORTDiscus, PsycINFO) was performed in January 2016 and updated in January 2017. Studies that investigated the association between classroom-based physical activity interventions and academic-related outcomes in primary (elementary) school-aged children were included. Meta-analyses were conducted in Review Manager, with effect sizes calculated separately for each outcome assessed.\n\n\nRESULTS\nThirty-nine articles met the inclusion criteria for the review, and 16 provided sufficient data and appropriate design for inclusion in the meta-analyses. Studies investigated a range of academic-related outcomes including classroom behaviour (e.g. on-task behaviour), cognitive functions (e.g. executive function), and academic achievement (e.g. standardised test scores). Results of the meta-analyses showed classroom-based physical activity had a positive effect on improving on-task and reducing off-task classroom behaviour (standardised mean difference = 0.60 (95% CI: 0.20,1.00)), and led to improvements in academic achievement when a progress monitoring tool was used (standardised mean difference = 1.03 (95% CI: 0.22,1.84)). However, no effect was found for cognitive functions (standardised mean difference = 0.33 (95% CI: -0.11,0.77)) or physical activity (standardised mean difference = 0.40 (95% CI: -1.15,0.95)).\n\n\nCONCLUSIONS\nResults suggest classroom-based physical activity may have a positive impact on academic-related outcomes. However, it is not possible to draw definitive conclusions due to the level of heterogeneity in intervention components and academic-related outcomes assessed. Future studies should consider the intervention period when selecting academic-related outcome measures, and use an objective measure of physical activity to determine intervention fidelity and effects on overall physical activity levels.",
"title": ""
},
{
"docid": "22d878a735d649f5932be6cd0b3979c9",
"text": "This study investigates the potential to introduce basic programming concepts to middle school children within the context of a classroom writing-workshop. In this paper we describe how students drafted, revised, and published their own digital stories using the introductory programming language Scratch and in the process learned fundamental CS concepts as well as the wider connection between programming and writing as interrelated processes of composition.",
"title": ""
},
{
"docid": "d8ead5d749b9af092adf626245e8178a",
"text": "This paper describes a LIN (Local Interconnect Network) Transmitter designed in a BCD HV technology. The key design target is to comply with EMI (electromagnetic interference) specification limits. The two main aspects are low EME (electromagnetic emission) and sufficient immunity against RF disturbance. A gate driver is proposed which uses a certain current summation network for lowering the slew rate on the one hand and being reliable against radio frequency (RF) disturbances within the automotive environment on the other hand. Nowadays the low cost single wire LIN Bus is used for establishing communication between sensors, actuators and other components.",
"title": ""
},
{
"docid": "f9e018fff97ac8ee91b68948cab52047",
"text": "How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to efficiently search and navigate? For example, to search for mugs, we search cabinets near the coffee machine and for fruits we try the fridge. In this work, we focus on incorporating semantic priors in the task of semantic navigation. We propose to use Graph Convolutional Networks for incorporating the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict the actions. For evaluation, we use the AI2-THOR framework. Our experiments show how semantic knowledge improves performance significantly. More importantly, we show improvement in generalization to unseen scenes and/or objects. The supplementary video can be accessed at the following link: https://youtu.be/otKjuO805dE .",
"title": ""
},
{
"docid": "b2f7826fe74d5bb3be8361aeb6ae41c4",
"text": "Skid steering of 4-wheel-drive electric vehicles has good maneuverability and mobility as a result of the application of differential torque to wheels on opposite sides. For path following, the paper utilizes the techniques of sliding mode control based on extended state observer which not only has robustness against the system dynamics not modeled and uncertain parameter but also reduces the switch gain effectively, so as to obtain a predictable behavior for the instantaneous center of rotation thus preventing excessive skidding. The efficiency of the algorithm is validated on a vehicle model with 14 degree of freedom. The simulation results show that the control law is robust against to the evaluation error of parameter and to the variation of the friction force within the wheel-ground interaction, what's more, it is easy to be carried out in controller.",
"title": ""
},
{
"docid": "2c5eb3fb74c6379dfd38c1594ebe85f4",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "44941e8f5b703bcacb51b6357cba7633",
"text": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.",
"title": ""
},
{
"docid": "8a8f310d13eea0fdb5b9c3b6f0a2818b",
"text": "In recent years, the rapid development of Internet, Internet of Things, and Cloud Computing have led to the explosive growth of data in almost every industry and business area. Big data has rapidly developed into a hot topic that attracts extensive attention from academia, industry, and governments around the world. In this position paper, we first briefly introduce the concept of big data, including its definition, features, and value. We then identify from different perspectives the significance and opportunities that big data brings to us. Next, we present representative big data initiatives all over the world. We describe the grand challenges (namely, data complexity, computational complexity, and system complexity), as well as possible solutions to address these challenges. Finally, we conclude the paper by presenting several suggestions on carrying out big data projects.",
"title": ""
},
{
"docid": "d8127fc372994baee6fd8632d585a347",
"text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.",
"title": ""
},
{
"docid": "b54a359025d863f6e2f5236eb823e740",
"text": "We present a method for fusing two acquisition modes, 2D photographs and 3D LiDAR scans, for depth-layer decomposition of urban facades. The two modes have complementary characteristics: point cloud scans are coherent and inherently 3D, but are often sparse, noisy, and incomplete; photographs, on the other hand, are of high resolution, easy to acquire, and dense, but view-dependent and inherently 2D, lacking critical depth information. In this paper we use photographs to enhance the acquired LiDAR data. Our key observation is that with an initial registration of the 2D and 3D datasets we can decompose the input photographs into rectified depth layers. We decompose the input photographs into rectangular planar fragments and diffuse depth information from the corresponding 3D scan onto the fragments by solving a multi-label assignment problem. Our layer decomposition enables accurate repetition detection in each planar layer, using which we propagate geometry, remove outliers and enhance the 3D scan. Finally, the algorithm produces an enhanced, layered, textured model. We evaluate our algorithm on complex multi-planar building facades, where direct autocorrelation methods for repetition detection fail. We demonstrate how 2D photographs help improve the 3D scans by exploiting data redundancy, and transferring high level structural information to (plausibly) complete large missing regions.",
"title": ""
}
] | scidocsrr |
4f70a710f54f5b340055d06c8d703ee6 | Influence of immediate post-extraction socket irrigation on development of alveolar osteitis after mandibular third molar removal: a prospective split-mouth study, preliminary report | [
{
"docid": "accbfd3c4caade25329a2a5743559320",
"text": "PURPOSE\nThe purpose of this investigation was to assess the frequency of complications of third molar surgery, both intraoperatively and postoperatively, specifically for patients 25 years of age or older.\n\n\nMATERIALS AND METHODS\nThis prospective study evaluated 3,760 patients, 25 years of age or older, who were to undergo third molar surgery by oral and maxillofacial surgeons practicing in the United States. The predictor variables were categorized as demographic (age, gender), American Society of Anesthesiologists classification, chronic conditions and medical risk factors, and preoperative description of third molars (present or absent, type of impaction, abnormalities or association with pathology). Outcome variables were intraoperative and postoperative complications, as well as quality of life issues (days of work missed or normal activity curtailed). Frequencies for data collected were tabulated.\n\n\nRESULTS\nThe sample was provided by 63 surgeons, and was composed of 3,760 patients with 9,845 third molars who were 25 years of age or older, of which 8,333 third molars were removed. Alveolar osteitis was the most frequently encountered postoperative problem (0.2% to 12.7%). Postoperative inferior alveolar nerve anesthesia/paresthesia occurred with a frequency of 1.1% to 1.7%, while lingual nerve anesthesia/paresthesia was calculated as 0.3%. All other complications also occurred with a frequency of less than 1%.\n\n\nCONCLUSION\nThe findings of this study indicate that third molar surgery in patients 25 years of age or older is associated with minimal morbidity, a low incidence of postoperative complications, and minimal impact on the patients quality of life.",
"title": ""
}
] | [
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "c02f98ba21ed80995e810c77a6def394",
"text": "Forensic and Security Laboratory School of Computer Engineering, Nanyang Technological University, Block N4, Nanyang Avenue, Singapore 639798 Biometrics Research Centre Department of Computing, The Hong Kong Polytechnic University Kowloon, Hong Kong Pattern Analysis and Machine Intelligence Research Group Department of Electrical and Computer Engineering University of Waterloo, 200 University Avenue West, Ontario, Canada",
"title": ""
},
{
"docid": "85b1fe5c3d6d68791345d32eda99055b",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "d961378b22aae8d793b38c40b66318de",
"text": "Socio-economic hardships put children in an underprivileged position. This systematic review was conducted to identify factors linked to underachievement of disadvantaged pupils in school science and maths. What could be done as evidence-based practice to make the lives of these young people better? The protocol from preferred reporting items for systematic reviews and meta-analyses (PRISMA) was followed. Major electronic educational databases were searched. Papers meeting pre-defined selection criteria were identified. Studies included were mainly large-scale evaluations with a clearly defined comparator group and robust research design. All studies used a measure of disadvantage such as lower SES, language barrier, ethnic minority or temporary immigrant status and an outcome measure like attainment in standardised national tests. A majority of papers capable of answering the research question were correlational studies. The review reports findings from 771 studies published from 2005 to 2014 in English language. Thirtyfour studies were synthesised. Results suggest major factors linking deprivation to underachievement can be thematically categorised into a lack of positive environment and support. Recommendations from the research reports are discussed. Subjects: Behavioral Sciences; Education; International & Comparative Education; Social Sciences",
"title": ""
},
{
"docid": "9e4da48d0fa4c7ff9566f30b73da3dc3",
"text": "Yang Song; Robert van Boeschoten University of Amsterdam Plantage Muidergracht 12, 1018 TV Amsterdam, the Netherlands y.song@uva.nl; r.m.van.boeschoten@hva.nl Abstract: Crowdfunding has been used as one of the effective ways for entrepreneurs to raise funding especially in creative industries. Individuals as well as organizations are paying more attentions to the emergence of new crowdfunding platforms. In the Netherlands, the government is also trying to help artists access financial resources through crowdfunding platforms. This research aims at discovering the success factors for crowdfunding projects from both founders’ and funders’ perspective. We designed our own website for founders and funders to observe crowdfunding behaviors. We linked our self-designed website to Google analytics in order to collect our data. Our research will contribute to crowdfunding success factors and provide practical recommendations for practitioners and researchers.",
"title": ""
},
{
"docid": "9779c9f4f15d9977a20592cabb777059",
"text": "Expert search or recommendation involves the retrieval of people (experts) in response to a query and on occasion, a given set of constraints. In this paper, we address expert recommendation in academic domains that are different from web and intranet environments studied in TREC. We propose and study graph-based models for expertise retrieval with the objective of enabling search using either a topic (e.g. \"Information Extraction\") or a name (e.g. \"Bruce Croft\"). We show that graph-based ranking schemes despite being \"generic\" perform on par with expert ranking models specific to topic-based and name-based querying.",
"title": ""
},
{
"docid": "df6c7f13814178d7b34703757899d6b1",
"text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
{
"docid": "35a85d6652bd333d93f8112aff83ab83",
"text": "For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is modelagnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.",
"title": ""
},
{
"docid": "587f6e73ca6653860cda66238d2ba146",
"text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper presents approaches to design positive tension controllers for cable suspended robots with redundant cables. Their effectiveness is demonstrated through simulations and experiments on a three degree-of-freedom cable suspended robots.",
"title": ""
},
{
"docid": "0829cf1fb1654525627fdc61d1814196",
"text": "The selection of indexing terms for representing documents is a key decision that limits how effective subsequent retrieval can be. Often stemming algorithms are used to normalize surface forms, and thereby address the problem of not finding documents that contain words related to query terms through infectional or derivational morphology. However, rule-based stemmers are not available for every language and it is unclear which methods for coping with morphology are most effective. In this paper we investigate an assortment of techniques for representing text and compare these approaches using data sets in eighteen languages and five different writing systems.\n We find character n-gram tokenization to be highly effective. In half of the languages examined n-grams outperform unnormalized words by more than 25%; in highly infective languages relative improvements over 50% are obtained. In languages with less morphological richness the choice of tokenization is not as critical and rule-based stemming can be an attractive option, if available. We also conducted an experiment to uncover the source of n-gram power and a causal relationship between the morphological complexity of a language and n-gram effectiveness was demonstrated.",
"title": ""
},
{
"docid": "bc7c5ab8ec28e9a5917fc94b776b468a",
"text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation(MSAP) aggregating the house appropriately by the landmark and the facility. Then in each cluster, using Linear Regression model with Normal Noise(LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.",
"title": ""
},
{
"docid": "592ceee67b3f8b3e8333cb104f56bd2f",
"text": "The goal of this paper is to study the team formation of multiple UAVs and UGVs for collaborative surveillance and crowd control under uncertain scenarios (e.g. crowd splitting). A comprehensive and coherent dynamic data driven adaptive multi-scale simulation (DDDAMS) framework is adopted, with the focus on simulation-based planning and control strategies related to the surveillance problem considered in this paper. To enable the team formation of multiple UAVs and UGVs, a two stage approach involving 1) crowd clustering and 2) UAV/UGV team assignment is proposed during the system operations by considering the geometry of the crowd clusters and solving a multi-objective optimization problem. For the experiment, an integrated testbed has been developed based on agent-based hardware-in-the-loop simulation involving seamless communications among simulated and real vehicles. Preliminary results indicate the effectiveness and efficiency of the proposed approach for the team formation of multiple UAVs and UGVs.",
"title": ""
},
{
"docid": "aa69409c1bddc7693ba2ed36206ac767",
"text": "Popularity of data-driven software engineering has led to an increasing demand on the infrastructures to support efficient execution of tasks that require deeper source code analysis. While task optimization and parallelization are the adopted solutions, other research directions are less explored. We present collective program analysis (CPA), a technique for scaling large scale source code analyses, especially those that make use of control and data flow analysis, by leveraging analysis specific similarity. Analysis specific similarity is about, whether two or more programs can be considered similar for a given analysis. The key idea of collective program analysis is to cluster programs based on analysis specific similarity, such that running the analysis on one candidate in each cluster is sufficient to produce the result for others. For determining analysis specific similarity and clustering analysis-equivalent programs, we use a sparse representation and a canonical labeling scheme. Our evaluation shows that for a variety of source code analyses on a large dataset of programs, substantial reduction in the analysis time can be achieved; on average a 69% reduction when compared to a baseline and on average a 36% reduction when compared to a prior technique. We also found that a large amount of analysis-equivalent programs exists in large datasets.",
"title": ""
},
{
"docid": "8f570416ceecf87310b7780ec935d814",
"text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.",
"title": ""
},
{
"docid": "f1ae820d7e067dabfda5efc1229762d8",
"text": "Data from 574 participants were used to assess perceptions of message, site, and sponsor credibility across four genres of websites; to explore the extent and effects of verifying web-based information; and to measure the relative influence of sponsor familiarity and site attributes on perceived credibility.The results show that perceptions of credibility differed, such that news organization websites were rated highest and personal websites lowest, in terms of message, sponsor, and overall site credibility, with e-commerce and special interest sites rated between these, for the most part.The results also indicated that credibility assessments appear to be primarily due to website attributes (e.g. design features, depth of content, site complexity) rather than to familiarity with website sponsors. Finally, there was a negative relationship between self-reported and observed information verification behavior and a positive relationship between self-reported verification and internet/web experience. The findings are used to inform the theoretical development of perceived web credibility. 319 new media & society Copyright © 2007 SAGE Publications Los Angeles, London, New Delhi and Singapore Vol9(2):319–342 [DOI: 10.1177/1461444807075015] ARTICLE 319-342 NMS-075015.qxd 9/3/07 11:54 AM Page 319 © 2007 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at Universiteit van Amsterdam SAGE on April 25, 2007 http://nms.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "18ce27c1840596779805efaeec18f3ed",
"text": "Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for the global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from the space. Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which has a great benefit for the LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS, including the radiative transfer equation-based method, the split-window algorithm and the single channel method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combining with the MODIS 8 day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy. OPEN ACCESS Remote Sens. 2014, 6 9830",
"title": ""
},
{
"docid": "ba66e377db4ef2b3c626a0a2f19da8c3",
"text": "A challenging aspect of scene text recognition is to handle text with distortions or irregular layout. In particular, perspective text and curved text are common in natural scenes and are difficult to recognize. In this work, we introduce ASTER, an end-to-end neural network model that comprises a rectification network and a recognition network. The rectification network adaptively transforms an input image into a new one, rectifying the text in it. It is powered by a flexible Thin-Plate Spline transformation which handles a variety of text irregularities and is trained without human annotations. The recognition network is an attentional sequence-to-sequence model that predicts a character sequence directly from the rectified image. The whole model is trained end to end, requiring only images and their groundtruth text. Through extensive experiments, we verify the effectiveness of the rectification and demonstrate the state-of-the-art recognition performance of ASTER. Furthermore, we demonstrate that ASTER is a powerful component in end-to-end recognition systems, for its ability to enhance the detector.",
"title": ""
}
] | scidocsrr |
db1f28c1a4323643072202e07c1b18cf | Predicting the direction of stock market prices using random forest | [
{
"docid": "43ff7d61119cc7b467c58c9c2e063196",
"text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "b86dd4b34965b15af417da275de761c4",
"text": "This article considered the problem of designing joint-actuation mechanisms that can allow fast and accurate operation of a robot arm, while guaranteeing a suitably limited level of injury risk. Different approaches to the problem were presented, and a method of performance evaluation was proposed based on minimum-time optimal control with safety constraints. The variable stiffness transmission (VST) scheme was found to be one of a few different possible schemes that allows the most flexibility and potential performance. Some aspects related to the implementation of the mechanics and control of VST actuation were also reported.",
"title": ""
},
{
"docid": "2bb356ac7620bacc9190f73f92b04da1",
"text": "It is well known that it is possible to construct “adversarial examples” for neural networks: inputs which are misclassified by the network yet indistinguishable from true data. We propose a simple modification to standard neural network architectures, thermometer encoding, which significantly increases the robustness of the network to adversarial examples. We demonstrate this robustness with experiments on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that models with thermometer-encoded inputs consistently have higher accuracy on adversarial examples, without decreasing generalization. State-of-the-art accuracy under the strongest known white-box attack was increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10. We explore the properties of these networks, providing evidence that thermometer encodings help neural networks to find more-non-linear decision boundaries.",
"title": ""
},
{
"docid": "e2d25382acd23c9431ccd3905d8bf13a",
"text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.",
"title": ""
},
{
"docid": "098625ba59c97d704ae85aa2e6776919",
"text": "A CDTA-based quadrature oscillator circuit is proposed. The circuit employs two current-mode allpass sections in a loop, and provides high-frequency sinusoidal oscillations in quadrature at high impedance output terminals of the CDTAs. The circuit has no floating capacitors, which is advantageous from the integrated circuit manufacturing point of view. Moreover, the oscillation frequency of this configuration can be made adjustable by using voltage controlled elements (MOSFETs), since the resistors in the circuit are either grounded or virtually grounded.",
"title": ""
},
{
"docid": "cadc31481c83e7fc413bdfb5d7bfd925",
"text": "A hierarchical model of approach and avoidance achievement motivation was proposed and tested in a college classroom. Mastery, performance-approach, and performance-avoidance goals were assessed and their antecedents and consequences examined. Results indicated that mastery goals were grounded in achievement motivation and high competence expectancies; performance-avoidance goals, in fear of failure and low competence expectancies; and performance-approach goals, in ach.ievement motivation, fear of failure, and high competence expectancies. Mastery goals facilitated intrinsic motivation, performance-approach goals enhanced graded performance, and performanceavoidance goals proved inimical to both intrinsic motivation and graded performance. The proposed model represents an integration of classic and contemporary approaches to the study of achievement motivation.",
"title": ""
},
{
"docid": "daa74311dafd227aa4ca0ae7ccabf12f",
"text": "Memristive devices are novel structures, developed primarily as memory. Another interesting application for memristive devices is logic circuits. In this paper, MRL (Memristor Ratioed Logic) - a hybrid CMOS-memristive logic family - is described. In this logic family, OR and AND logic gates are based on memristive devices, and CMOS inverters are added to provide a complete logic structure and signal restoration. Unlike previously published memristive-based logic families, the MRL family is compatible with standard CMOS logic. A case study of an eight-bit full adder is presented and related design considerations are discussed.",
"title": ""
},
{
"docid": "751b2a0e7b39e005d1664b302f84b08d",
"text": "The classification of a synthetic aperture radar (SAR) image is a significant yet challenging task, due to the presence of speckle noises and the absence of effective feature representation. Inspired by deep learning technology, a novel deep supervised and contractive neural network (DSCNN) for SAR image classification is proposed to overcome these problems. In order to extract spatial features, a multiscale patch-based feature extraction model that consists of gray level-gradient co-occurrence matrix, Gabor, and histogram of oriented gradient descriptors is developed to obtain primitive features from the SAR image. Then, to get discriminative representation of initial features, the DSCNN network that comprises four layers of supervised and contractive autoencoders is proposed to optimize features for classification. The supervised penalty of the DSCNN can capture the relevant information between features and labels, and the contractive restriction aims to enhance the locally invariant and robustness of the encoding representation. Consequently, the DSCNN is able to produce effective representation of sample features and provide superb predictions of the class labels. Moreover, to restrain the influence of speckle noises, a graph-cut-based spatial regularization is adopted after classification to suppress misclassified pixels and smooth the results. Experiments on three SAR data sets demonstrate that the proposed method is able to yield superior classification performance compared with some related approaches.",
"title": ""
},
{
"docid": "e0632c0bb393eb567f8bcc21468742b2",
"text": "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.",
"title": ""
},
{
"docid": "963eb2a6225a1f320489a504f8010e94",
"text": "A method for recognizing the emotion states of subjects based on 30 features extracted from their Galvanic Skin Response (GSR) signals was proposed. GSR signals were acquired by means of experiments attended by those subjects. Next the data was normalized with the calm signal of the same subject after being de-noised. Then the normalized data were extracted features before the step of feature selection. Immune Hybrid Particle Swarm Optimization (IH-PSO) was proposed to select the feature subsets of different emotions. Classifier for feature selection was evaluated on the correct recognition as well as number of the selected features. At last, this paper verified the effectiveness of the feature subsets selected with another new data. All performed in this paper illustrate that IH-PSO can achieve much effective results, and further more, demonstrate that there is significant emotion information in GSR signal.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "2ab741f14039379cbde7c16c6d99b963",
"text": "Melanoma, a malignant form of skin cancer is very threatening to life. Diagnosis of melanoma at an earlier stage is highly needed as it has a very high cure rate. Benign and malignant forms of skin cancer can be detected by analyzing the lesions present on the surface of the skin using dermoscopic images. In this work, an automated skin lesion detection system has been developed which learns the representation of the image using Google’s pretrained CNN model known as Inception-v3 [1]. After obtaining the representation vector for our input dermoscopic images we have trained two layer feed forward neural network to classify the images as malignant or benign. The system also classifies the images based on the cause of the cancer either due to melanocytic or non-melanocytic cells using a different neural network. These classification tasks are part of the challenge organized by International Skin Imaging Collaboration (ISIC) 2017. Our system learns to classify the images based on the model built using the training images given in the challenge and the experimental results were evaluated using validation and test sets. Our system has achieved an overall accuracy of 65.8% for the validation set.",
"title": ""
},
{
"docid": "4e734f8e7d3ac7249ce7eb4ad5833c95",
"text": "Conventional sports training emphasizes adequate training of muscle fibres, of cardiovascular conditioning and/or neuromuscular coordination. Most sports-associated overload injuries however occur within elements of the body wide fascial net, which are then loaded beyond their prepared capacity. This tensional network of fibrous tissues includes dense sheets such as muscle envelopes, aponeuroses, as well as specific local adaptations, such as ligaments or tendons. Fibroblasts continually but slowly adapt the morphology of these tissues to repeatedly applied challenging loading stimulations. Principles of a fascia oriented training approach are introduced. These include utilization of elastic recoil, preparatory counter movement, slow and dynamic stretching, as well as rehydration practices and proprioceptive refinement. Such training should be practiced once or twice a week in order to yield in a more resilient fascial body suit within a time frame of 6-24 months. Some practical examples of fascia oriented exercises are presented.",
"title": ""
},
{
"docid": "afcb6c9130e16002100ff68f68d98ff3",
"text": "This study characterizes adults who report being physically abused during childhood, and examines associations of reported type and frequency of abuse with adult mental health. Data were derived from the 2000-2001 and 2004-2005 National Epidemiologic Survey on Alcohol and Related Conditions, a large cross-sectional survey of a representative sample (N = 43,093) of the U.S. population. Weighted means, frequencies, and odds ratios of sociodemographic correlates and prevalence of psychiatric disorders were computed. Logistic regression models were used to examine the strength of associations between child physical abuse and adult psychiatric disorders adjusted for sociodemographic characteristics, other childhood adversities, and comorbid psychiatric disorders. Child physical abuse was reported by 8% of the sample and was frequently accompanied by other childhood adversities. Child physical abuse was associated with significantly increased adjusted odds ratios (AORs) of a broad range of DSM-IV psychiatric disorders (AOR = 1.16-2.28), especially attention-deficit hyperactivity disorder, posttraumatic stress disorder, and bipolar disorder. A dose-response relationship was observed between frequency of abuse and several adult psychiatric disorder groups; higher frequencies of assault were significantly associated with increasing adjusted odds. The long-lasting deleterious effects of child physical abuse underscore the urgency of developing public health policies aimed at early recognition and prevention.",
"title": ""
},
{
"docid": "bf126b871718a5ee09f1e54ea5052d20",
"text": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.",
"title": ""
},
{
"docid": "5950aadef33caa371f0de304b2b4869d",
"text": "Responding to a 2015 MISQ call for research on service innovation, this study develops a conceptual model of service innovation in higher education academic libraries. Digital technologies have drastically altered the delivery of information services in the past decade, raising questions about critical resources, their interaction with digital technologies, and the value of new services and their measurement. Based on new product development (NPD) and new service development (NSD) processes and the service-dominant logic (SDL) perspective, this research-in-progress presents a conceptual model that theorizes interactions between critical resources and digital technologies in an iterative process for delivery of service innovation in academic libraries. The study also suggests future research paths to confirm, expand, and validate the new service innovation model.",
"title": ""
},
{
"docid": "b1422b2646f02a5a84a6a4b13f5ae7d8",
"text": "Two experiments examined the influence of timbre on auditory stream segregation. In experiment 1, listeners heard sequences of orchestral tones equated for pitch and loudness, and they rated how strongly the instruments segregated. Multidimensional scaling analyses of these ratings revealed that segregation was based on the static and dynamic acoustic attributes that influenced similarity judgements in a previous experiment (P Iverson & CL Krumhansl, 1993). In Experiment 2, listeners heard interleaved melodies and tried to recognize the melodies played by a target timbre. The results extended the findings of Experiment 1 to tones varying pitch. Auditory stream segregation appears to be influenced by gross differences in static spectra and by dynamic attributes, including attack duration and spectral flux. These findings support a gestalt explanation of stream segregation and provide evidence against peripheral channel model.",
"title": ""
},
{
"docid": "7a62e5e29b9450280391a95145216877",
"text": "We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. Therefore, the information from every location in the image is propagated to every other location. Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 x 256 image starting from raw RGB pixel values, given the super-pixel mask that takes an additional 0.3 seconds using an off-the-shelf implementation.",
"title": ""
},
{
"docid": "3451c521dd27c90c324f66360991178c",
"text": "Compliant motion of a manipulator occurs when the manipulator position is constrained by the task geometry. Compliant motion may be produced either by a passive mechanical compliance built in to the manipulator, or by an active compliance implemented in the control servo loop. The second method, called force control, is the subject of this paper. In particular a theory of force control based on formal models of the manipulator and the task geometry is presented. The ideal effector is used to model the manipulator, the ideal surface is used to model the task geometry, and the goal trajectory is used to model the desired behavior of the manipulator. Models are also defined for position control and force control, providing a precise semantics for compliant motion primitives in manipulation programming languages. The formalism serves as a simple interface between the manipulator and the programmer, isolating the programmer from the fundamental complexity of low-level manipulator control. A method of automatically synthesizing a restricted class of manipulator programs based on the formal models of task and goal trajectory is also provided by the formalism.",
"title": ""
},
{
"docid": "e685a22b6f7b20fb1289923e86e467c5",
"text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.",
"title": ""
},
{
"docid": "ca9f48691e93b6282df2277f4cf8885e",
"text": "This paper presents a novel technique, anatomy, for publishing sensitive data. Anatomy releases all the quasi-identifier and sensitive values directly in two separate tables. Combined with a grouping mechanism, this approach protects privacy, and captures a large amount of correlation in the microdata. We develop a linear-time algorithm for computing anatomized tables that obey the l-diversity privacy requirement, and minimize the error of reconstructing the microdata. Extensive experiments confirm that our technique allows significantly more effective data analysis than the conventional publication method based on generalization. Specifically, anatomy permits aggregate reasoning with average error below 10%, which is lower than the error obtained from a generalized table by orders of magnitude.",
"title": ""
}
] | scidocsrr |
124e4bf43f120613c8532b111157ea96 | Encrypted accelerated least squares regression | [
{
"docid": "4e0e6ca2f4e145c17743c42944da4cc8",
"text": "We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security.",
"title": ""
},
{
"docid": "ef444570c043be67453317e26600972f",
"text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.",
"title": ""
}
] | [
{
"docid": "7432009332e13ebc473c9157505cb59c",
"text": "The use of future contextual information is typically shown to be helpful for acoustic modeling. However, for the recurrent neural network (RNN), it’s not so easy to model the future temporal context effectively, meanwhile keep lower model latency. In this paper, we attempt to design a RNN acoustic model that being capable of utilizing the future context effectively and directly, with the model latency and computation cost as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU) with an input projection layer inserted in it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that, the proposed model performs much better than long short-term memory (LSTM) and mGRU models, whereas enables online decoding with a maximum latency of 170 ms. This model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half less parameters.",
"title": ""
},
{
"docid": "4eca3018852fd3107cb76d1d95f76a0a",
"text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.",
"title": ""
},
{
"docid": "8ef1592544071c485d82c0848d02a2d0",
"text": "Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation (ABS). We have summarized relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.",
"title": ""
},
{
"docid": "9f9302cf8560b65bed7688f5339a865c",
"text": "Understanding short texts is crucial to many applications, but challenges abound. First, short texts do not always observe the syntax of a written language. As a result, traditional natural language processing tools, ranging from part-of-speech tagging to dependency parsing, cannot be easily applied. Second, short texts usually do not contain sufficient statistical signals to support many state-of-the-art approaches for text mining such as topic modeling. Third, short texts are more ambiguous and noisy, and are generated in an enormous volume, which further increases the difficulty to handle them. We argue that semantic knowledge is required in order to better understand short texts. In this work, we build a prototype system for short text understanding which exploits semantic knowledge provided by a well-known knowledgebase and automatically harvested from a web corpus. Our knowledge-intensive approaches disrupt traditional methods for tasks such as text segmentation, part-of-speech tagging, and concept labeling, in the sense that we focus on semantics in all these tasks. We conduct a comprehensive performance evaluation on real-life data. The results show that semantic knowledge is indispensable for short text understanding, and our knowledge-intensive approaches are both effective and efficient in discovering semantics of short texts.",
"title": ""
},
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "e5b8368f13bf0f5e1969910d1ef81ac4",
"text": "BACKGROUND\nIn girls who present with vaginal trauma, sexual abuse is often the primary diagnosis. The differential diagnosis must include patterns and the mechanism of injury that differentiate accidental injuries from inflicted trauma.\n\n\nCASE\nA 7-year-old prepubertal girl presented to the emergency department with genital bleeding after a serious accidental impaling injury from inline skating. After rapid abduction of the legs and a fall onto the blade of an inline skate this child incurred an impaling genital injury consistent with an accidental mechanism. The dramatic genital injuries when repaired healed with almost imperceptible residual evidence of previous trauma.\n\n\nSUMMARY AND CONCLUSION\nTo our knowledge, this case report represents the first in the medical literature of an impaling vaginal trauma from an inline skate and describes its clinical and surgical management.",
"title": ""
},
{
"docid": "d55aae728991060ed4ba1f9a6b59e2fe",
"text": "Evolutionary algorithms have become robust tool in data processing and modeling of dynamic, complex and non-linear processes due to their flexible mathematical structure to yield optimal results even with imprecise, ambiguity and noise at its input. The study investigates evolutionary algorithms for solving Sudoku task. Various hybrids are presented here as veritable algorithm for computing dynamic and discrete states in multipoint search in CSPs optimization with application areas to include image and video analysis, communication and network design/reconstruction, control, OS resource allocation and scheduling, multiprocessor load balancing, parallel processing, medicine, finance, security and military, fault diagnosis/recovery, cloud and clustering computing to mention a few. Solution space representation and fitness functions (as common to all algorithms) were discussed. For support and confidence model adopted π1=0.2 and π2=0.8 respectively yields better convergence rates – as other suggested value combinations led to either a slower or non-convergence. CGA found an optimal solution in 32 seconds after 188 iterations in 25runs; while GSAGA found its optimal solution in 18seconds after 402 iterations with a fitness progression achieved in 25runs and consequently, GASA found an optimal solution 2.112seconds after 391 iterations with fitness progression after 25runs respectively.",
"title": ""
},
{
"docid": "063287a98a5a45bc8e38f8f8c193990e",
"text": "This paper investigates the relationship between the contextual factors related to the firm’s decision-maker and the process of international strategic decision-making. The analysis has been conducted focusing on small and medium-sized enterprises (SME). Data for the research came from 111 usable responses to a survey on a sample of SME decision-makers in international field. The results of regression analysis indicate that the context variables, both internal and external, exerted more influence on international strategic decision making process than the decision-maker personality characteristics. DOI: 10.4018/ijabe.2013040101 2 International Journal of Applied Behavioral Economics, 2(2), 1-22, April-June 2013 Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The purpose of this paper is to reverse this trend and to explore the different dimensions of SMEs’ strategic decision-making process in international decisions and, within these dimensions, we want to understand if are related to the decision-maker characteristics and also to broader contextual factors characteristics. The paper is organized as follows. In the second section the concepts of strategic decision-making process and factors influencing international SDMP are approached. Next, the research methodology, findings analysis and discussion will be presented. Finally, conclusions, limitations of the study and suggestions for future research are explored. THEORETICAL BACKGROUND Strategic Decision-Making Process The process of making strategic decisions has emerged as one of the most important themes of strategy research over the last two decades (Papadakis, 2006; Papadakis & Barwise, 2002). According to Harrison (1996), the SMDP can be defined as a combination of the concepts of strategic gap and management decision making process, with the former “determined by comparing the organization’s inherent capabilities with the opportunities and threats in its external environment”, while the latter is composed by a set of decision-making functions logically connected, that begins with the setting of managerial objective, followed by the search for information to develop a set of alternatives, that are consecutively compared and evaluated, and selected. Afterward, the selected alternative is implemented and, finally, it is subjected to follow-up and control. Other authors (Fredrickson, 1984; Mintzberg, Raisinghani, & Theoret, 1976) developed several models of strategic decision-making process since 1970, mainly based on the number of stages (Nooraie, 2008; Nutt, 2008). Although different researches investigated SDMP with specific reference to either small firms (Brouthers, et al., 1998; Gibcus, Vermeulen, & Jong, 2009; Huang, 2009; Jocumsen, 2004), or internationalization process (Aharoni, Tihanyi, & Connelly, 2011; Dimitratos, et al., 2011; Nielsen & Nielsen, 2011), there is a lack of studies that examine the SDMP in both perspectives. In this study we decided to mainly follow the SDMP defined by Harrison (1996) adapted to the international arena and particularly referred to market development decisions. Thus, for the definition of objectives (first phase) we refer to those in international field, for search for information, development and comparison of alternatives related to foreign markets (second phase) we refer to the systematic International Market Selection (IMS), and to the Entry Mode Selection (EMS) methodologies. 
For the implementation of the selected alternative (third phase) we mainly mean the entering in a particular foreign market with a specific entry mode, and finally, for follow-up and control (fourth phase) we refer to the control and evaluation of international activities. Dimensions of the Strategic Decision-Making Process Several authors attempted to implement a set of dimensions in approaching strategic process characteristics, and the most adopted are: • Rationality; • Formalization; • Hierarchical Decentralization and lateral communication; • Political Behavior.",
"title": ""
},
{
"docid": "ceb725186e5312601091157769c07b5f",
"text": "Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios, particularly on edge devices such as mobile and other consumer devices, given their high computational and memory requirements. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that accounts for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical usage. In particular, we propose a new balanced metric called NetScore, which is designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analysis between deep neural networks in literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 50 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. The proposed NetScore metric, along with the other tested metrics, are by no means perfect, but the hope is to push the conversation towards better universal metrics for evaluating deep neural networks for use in practical scenarios to help guide practitioners in model design.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "1d1eeb2f5a16fd8e1deed16a5839505b",
"text": "Searchable symmetric encryption (SSE) is a widely popular cryptographic technique that supports the search functionality over encrypted data on the cloud. Despite the usefulness, however, most of existing SSE schemes leak the search pattern, from which an adversary is able to tell whether two queries are for the same keyword. In recent years, it has been shown that the search pattern leakage can be exploited to launch attacks to compromise the confidentiality of the client’s queried keywords. In this paper, we present a new SSE scheme which enables the client to search encrypted cloud data without disclosing the search pattern. Our scheme uniquely bridges together the advanced cryptographic techniques of chameleon hashing and indistinguishability obfuscation. In our scheme, the secure search tokens for plaintext keywords are generated in a randomized manner, so it is infeasible to tell whether the underlying plaintext keywords are the same given two secure search tokens. In this way, our scheme well avoids using deterministic secure search tokens, which is the root cause of the search pattern leakage. We provide rigorous security proofs to justify the security strengths of our scheme. In addition, we also conduct extensive experiments to demonstrate the performance. Although our scheme for the time being is not immediately applicable due to the current inefficiency of indistinguishability obfuscation, we are aware that research endeavors on making indistinguishability obfuscation practical is actively ongoing and the practical efficiency improvement of indistinguishability obfuscation will directly lead to the applicability of our scheme. Our paper is a new attempt that pushes forward the research on SSE with concealed search pattern.",
"title": ""
},
{
"docid": "53c0564d82737d51ca9b7ea96a624be4",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "176386fd6f456d818d7ebf81f65d5030",
"text": "Event-driven architecture is gaining momentum in research and application areas as it promises enhanced responsiveness and asynchronous communication. The combination of event-driven and service-oriented architectural paradigms and web service technologies provide a viable possibility to achieve these promises. This paper outlines an architectural design and accompanying implementation technologies for its realization as a web services-based event-driven SOA.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "cae2b62afbecedc995612ed3a710e9d9",
"text": "Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed with varying availability and a variety of usage and cost policies for diverse users at different times and, priorities as well as goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality of services based scheduling. It enables the regulation of supply and demand for resources and provides an incentive for resource owners for participating in the Grid and motives the users to trade-off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economicbased systems for peer-to-peer distributed computing by developing users’ quality-of-service requirements driven scheduling strategies and algorithms. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.",
"title": ""
},
{
"docid": "fe842f2857bf3a60166c8f52e769585a",
"text": "We study the problem of explaining a rich class of behavioral properties of deep neural networks. Distinctively, our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on a quantity and distribution of interest, using an axiomatically-justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by demonstrating a number of its unique capabilities on convolutional neural networks trained on ImageNet. Our evaluation demonstrates that influence-directed explanations (1) identify influential concepts that generalize across instances, (2) can be used to extract the “essence” of what the network learned about a class, and (3) isolate individual features the network uses to make decisions and distinguish related classes.",
"title": ""
},
{
"docid": "43bfbebda8dcb788057e1c98b7fccea6",
"text": "Der Beitrag stellt mit Quasar Enterprise einen durchgängigen, serviceorientierten Ansatz zur Gestaltung großer Anwendungslandschaften vor. Er verwendet ein Architektur-Framework zur Strukturierung der methodischen Schritte und führt ein Domänenmodell zur Präzisierung der Begrifflichkeiten und Entwicklungsartefakte ein. Die dargestellten methodischen Bausteine und Richtlinien beruhen auf langjährigen Erfahrungen in der industriellen Softwareentwicklung. 1 Motivation und Hintergrund sd&m beschäftigt sich seit seiner Gründung vor 25 Jahren mit dem Bau von individuellen Anwendungssystemen. Als konsolidierte Grundlage der Arbeit in diesem Bereich wurde Quasar (Quality Software Architecture) entwickelt – die sd&m StandardArchitektur für betriebliche Informationssysteme [Si04]. Quasar dient sd&m als Referenz für seine Disziplin des Baus einzelner Anwendungen. Seit einigen Jahren beschäftigt sich sd&m im Auftrag seiner Kunden mehr und mehr mit Fragestellungen auf der Ebene ganzer Anwendungslandschaften. Das Spektrum reicht von IT-Beratung zur Unternehmensarchitektur, über die Systemintegration querschnittlicher technischer, aber auch dedizierter fachlicher COTS-Produkte bis hin zum Bau einzelner großer Anwendungssysteme auf eine Art und Weise, dass eine perfekte Passung in eine moderne Anwendungslandschaft gegeben ist. Zur Abdeckung dieses breiten Spektrums an Aufgaben wurde eine neue Disziplin zur Gestaltung von Anwendungslandschaften benötigt. sd&m entwickelte hierzu eine neue Referenz – Quasar Enterprise – ein Quasar auf Unternehmensebene.",
"title": ""
},
{
"docid": "d35c176cfe5c8296862513c26f0fdffa",
"text": "Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
},
{
"docid": "96029f6daa55fff7a76ab9bd48ebe7b9",
"text": "According to the principle of compositionality, the meaning of a sentence is computed from the meaning of its parts and the way they are syntactically combined. In practice, however, the syntactic structure is computed by automatic parsers which are far-from-perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, deciding how and to what extent syntactic context modifies the semantic composition function, as well as on how to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges, by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.",
"title": ""
}
] | scidocsrr |
91cefa0057de61a06d353ffeb8921304 | Compression By Induction of Hierarchical Grammars | [
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
},
{
"docid": "105951b58d594fdb3a07e1adbb76dc5f",
"text": "The “Prediction by Partial Matching” (PPM) data compression algorithm developed by Cleary and Witten is capable of very high compression rates, encoding English text in as little as 2.2 bits/character. Here it is shown that the estimates made by Cleary and Witten of the resources required to implement the scheme can be revised to allow for a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kbytes/s on a small workstation, and operates within a few hundred kilobytes of data space, but still obtains compression of about 2.4 bits/character on",
"title": ""
}
] | [
{
"docid": "a94d8b425aed0ade657aa1091015e529",
"text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",
"title": ""
},
{
"docid": "149d76dfaa019b965965062645e4845d",
"text": "In this paper we provide a detailed and comprehensive survey of proposed approaches for network design, charting the evolution of models and techniques for the automatic planning of cellular wireless services. These problems present themselves as a trade-off between commitment to infrastructure and quality of service, and have become increasingly complex with the advent of more sophisticated protocols and wireless architectures. Consequently these problems are receiving increased attention from researchers in a variety of fields who adopt a wide range of models, assumptions and methodologies for problem solution. We seek to unify this dispersed and fragmented literature by charting the evolution of centralised planning for cellular systems.",
"title": ""
},
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
},
{
"docid": "e1ada58b1ae0e92f12d4fb049de5a4bb",
"text": "We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.",
"title": ""
},
{
"docid": "2e32d668383eaaed096aa2e34a10d8e9",
"text": "Splicing and copy-move are two well known methods of passive image forgery. In this paper, splicing and copy-move forgery detection are performed simultaneously on the same database CASIA v1.0 and CASIA v2.0. Initially, a suspicious image is taken and features are extracted through BDCT and enhanced threshold method. The proposed technique decides whether the given image is manipulated or not. If it is manipulated then support vector machine (SVM) classify that the given image is gone through splicing forgery or copy-move forgery. For copy-move detection, ZM-polar (Zernike Moment) is used to locate the duplicated regions in image. Experimental results depict the performance of the proposed method.",
"title": ""
},
{
"docid": "614cc9968370bffb32cf70f44c8f8688",
"text": "The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.",
"title": ""
},
{
"docid": "948b157586c75674e75bd50b96162861",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "adf6ac64c2c1af405e9500ce1ea35cf2",
"text": "Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews.\n In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis.",
"title": ""
},
{
"docid": "5e82e67ebb99cac1b3874bf08e03b550",
"text": "Nonsmooth nonnegative matrix factorization (nsNMF) is capable of producing more localized, less overlapped feature representations than other variants of NMF while keeping satisfactory fit to data. However, nsNMF as well as other existing NMF methods are incompetent to learn hierarchical features of complex data due to its shallow structure. To fill this gap, we propose a deep nsNMF method coined by the fact that it possesses a deeper architecture compared with standard nsNMF. The deep nsNMF not only gives part-based features due to the nonnegativity constraints but also creates higher level, more abstract features by combing lower level ones. The in-depth description of how deep architecture can help to efficiently discover abstract features in dnsNMF is presented, suggesting that the proposed model inherits the major advantages from both deep learning and NMF. Extensive experiments demonstrate the standout performance of the proposed method in clustering analysis.",
"title": ""
},
{
"docid": "fb80a9ad20947bee7ba23d585896b6e8",
"text": "This paper presents an intelligent streetlight management system based on LED lamps, designed to facilitate its deployment in existing facilities. The proposed approach, which is based on wireless communication technologies, will minimize the cost of investment of traditional wired systems, which always need civil engineering for burying of cable underground and consequently are more expensive than if the connection of the different nodes is made over the air. The deployed solution will be aware of their surrounding's environmental conditions, a fact that will be approached for the system intelligence in order to learn, and later, apply dynamic rules. The knowledge of real time illumination needs, in terms of instant use of the street in which it is installed, will also feed our system, with the objective of providing tangible solutions to reduce energy consumption according to the contextual needs, an exact calculation of energy consumption and reliable mechanisms for preventive maintenance of facilities.",
"title": ""
},
{
"docid": "2895400382c5c8358d83a3c16b89f83c",
"text": "The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE—though capable of generating highly nonlinear embeddings—are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm’s performance—both successes and failures—and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.",
"title": ""
},
{
"docid": "2c37ee67205320d54149a71be104c0e1",
"text": "This talk will review the mission, activities, and recommendations of the Blue Ribbon Panel on Cyberinfrastructure recently appointed by the leadership on the U.S. National Science Foundation (NSF). The NSF invests in people, ideas, and tools and in particular is a major investor in basic research to produce communication and information technology (ICT) as well as its use in supporting basic research and education in most all areas of science and engineering. The NSF through its Directorate for Computer and Information Science and Engineering (CISE) has provided substantial funding for high-end computing resources, initially by awards to five supercomputer centers and later through $70 M per year investments in two partnership alliances for advanced computation infrastructures centered at the University of Illinois and the University of California, San Diego. It has also invested in an array of complementary R&D initiatives in networking, middleware, digital libraries, collaboratories, computational and visualization science, and distributed terascale grid environments.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "46fa91ce587d094441466a7cbe5c5f07",
"text": "Automatic facial expression analysis is an interesting and challenging problem which impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving effective facial representative features from face images is a vital step towards successful expression recognition. In this paper, we evaluate facial representation based on statistical local features called Local Binary Patterns (LBP) for facial expression recognition. Simulation results illustrate that LBP features are effective and efficient for facial expression recognition. A real-time implementation of the proposed approach is also demonstrated which can recognize expressions accurately at the rate of 4.8 frames per second.",
"title": ""
},
{
"docid": "07f0996fe2dcd3b52931b0aa09ac6f45",
"text": "We are interested in the situation where we have two or more re presentations of an underlying phenomenon. In particular we ar e interested in the scenario where the representation are complementary. This implies that a single individual representation is not sufficient to fully dis criminate a specific instance of the underlying phenomenon, it also means that each r presentation is an ambiguous representation of the other complementary spa ce . In this paper we present a latent variable model capable of consolidating multiple complementary representations. Our method extends canonical cor relation analysis by introducing additional latent spaces that are specific to th e different representations, thereby explaining the full variance of the observat ions. These additional spaces, explaining representation specific variance, sepa rat ly model the variance in a representation ambiguous to the other. We develop a spec tral algorithm for fast computation of the embeddings and a probabilistic mode l (based on Gaussian processes) for validation and inference. The proposed mode l has several potential application areas, we demonstrate its use for multi-modal r egression on a benchmark human pose estimation data set.",
"title": ""
},
{
"docid": "5d8f33b7f28e6a8d25d7a02c1f081af1",
"text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
At the same time, human * Correspondence: a.holzinger@tugraz.at Research Unit Human-Computer Interaction, Austrian IBM Watson Think Group, Institute for Medical Informatics, Statistics & Documentation, Medical University Graz, Austria Full list of author information is available at the end of the article Holzinger et al. BMC Bioinformatics 2014, 15(Suppl 6):I1 http://www.biomedcentral.com/1471-2105/15/S6/I1",
"title": ""
},
{
"docid": "37af5d5ee2e4f6b94aa5c93d12f98017",
"text": "This paper reviews prior research in management accounting innovations covering the period 1926-2008. Management accounting innovations refer to the adoption of “newer” or modern forms of management accounting systems such as activity-based costing, activity-based management, time-driven activity-based costing, target costing, and balanced scorecards. Although some prior reviews, covering the period until 2000, place emphasis on modern management accounting techniques, however, we believe that the time gap between 2000 and 2008 could entail many new or innovative accounting issues. We find that research in management accounting innovations has intensified during the period 2000-2008, with the main focus has been on explaining various factors associated with the implementation and the outcome of an innovation. In addition, research in management accounting innovations indicates the dominant use of sociological-based theories and increasing use of field studies. We suggest some directions for future research pertaining to management accounting innovations.",
"title": ""
},
{
"docid": "7c86594614a6bd434ee4e749eb661cee",
"text": "The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed. John R. Anderson is a cognitive scientist with an interest in cognitive architectures and intelligent tutoring systems; he is a Professor of Psychology and Computer Science at Carnegie Mellon University. Michael Matessa is a graduate student studying cognitive psychology at Carnegie Mellon University; his interests include cognitive architectures and modeling the acquisition of information from the environment. Christian Lebiere is a computer scientist with an interest in intelligent architectures; he is a Research Programmer in the Department of Psycholo and a graduate student in the School of Computer Science at Carnegie Me1 By on University. 440 ANDERSON, MATESSA, LEBIERE",
"title": ""
},
{
"docid": "09c5af6e117376657f44afc3a2125293",
"text": "One of the main disturbances in a frequency-modulated continuous wave radar system for range measurement is nonlinearity in the frequency ramp. The intermediate frequency (IF) signal and consequently the target range accuracy are dependent on the type of the nonlinearity present in the frequency ramp. Moreover, the type of frequency ramp nonlinearity cannot be directly specified, which makes the problem even more challenging. In this paper, the frequency ramp nonlinearity is investigated with the modified short-time Fourier transform method by using the short-time Chirp-Z transform method with high accuracy. The random and periodic nonlinearities are characterized and their sources are identified as phase noise and spurious. These types of frequency deviations are intentionally increased, and their influence on the linearity and the IF-signal is investigated. The dependence of target range estimation accuracy on the frequency ramp nonlinearity, phase noise, spurious, and signal-to-noise ratio in the IF-signal are described analytically and are verified on the basis of measurements.",
"title": ""
},
{
"docid": "b09c438933e0c9300e19f035eb0e9305",
"text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device to reduce a size and cost of the power module thanks to the integration of IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not so small as the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of reverse recovery characteristics of FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters by using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating the second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.",
"title": ""
}
] | scidocsrr |
26783be6c02049e3b4df3b373534313e | Value Chain Creation in Business Analytics | [
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
},
{
"docid": "77bc1c8c80f756845b87428382e8fd91",
"text": "Previous research has proposed different types for and contingency factors affecting information technology governance. Yet, in spite of this valuable work, it is still unclear through what mechanisms IT governance affects organizational performance. We make a detailed argument for the mediation of strategic alignment in this process. Strategic alignment remains a top priority for business and IT executives, but theory-based empirical research on the relative importance of the factors affecting strategic alignment is still lagging. By consolidating strategic alignment and IT governance models, this research proposes a nomological model showing how organizational value is created through IT governance mechanisms. Our research model draws upon the resource-based view of the firm and provides guidance on how strategic alignment can mediate the effectiveness of IT governance on organizational performance. As such, it contributes to the knowledge bases of both alignment and IT governance literatures. Using dyadic data collected from 131 Taiwanese companies (cross-validated with archival data from 72 firms), we uncover a positive, significant, and impactful linkage between IT governance mechanisms and strategic alignment and, further, between strategic alignment and organizational performance. We also show that the effect of IT governance mechanisms on organizational performance is fully mediated by strategic alignment. Besides making contributions to construct and measure items in this domain, this research contributes to the theory base by integrating and extending the literature on IT governance and strategic alignment, both of which have long been recognized as critical for achieving organizational goals.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "0c2e489edeac2c8ad5703eda644edfac",
"text": "Nowadays, more and more decision procedures are supported or even guided by automated processes. An important technique in this automation is data mining. In this chapter we study how such automatically generated decision support models may exhibit discriminatory behavior towards certain groups based upon, e.g., gender or ethnicity. Surprisingly, such behavior may even be observed when sensitive information is removed or suppressed and the whole procedure is guided by neutral arguments such as predictive accuracy only. The reason for this phenomenon is that most data mining methods are based upon assumptions that are not always satisfied in reality, namely, that the data is correct and represents the population well. In this chapter we discuss the implicit modeling assumptions made by most data mining algorithms and show situations in which they are not satisfied. Then we outline three realistic scenarios in which an unbiased process can lead to discriminatory models. The effects of the implicit assumptions not being fulfilled are illustrated by examples. The chapter concludes with an outline of the main challenges and problems to be solved.",
"title": ""
},
{
"docid": "6dd810d8a5180b49ded351f0acf135b8",
"text": "In classification problem, we assume that the samples around the class boundary are more likely to be incorrectly annotated than others, and propose boundaryconditional class noise (BCN). Based on the BCN assumption, we use unnormalized Gaussian and Laplace distributions to directly model how class noise is generated, in symmetric and asymmetric cases. In addition, we demonstrate that Logistic regression and Probit regression can also be reinterpreted from this class noise perspective, and compare them with the proposed models. The empirical study shows that, the proposed asymmetric models overall outperform the benchmark linear models, and the asymmetric Laplace-noise model achieves the best performance among all.",
"title": ""
},
{
"docid": "61160371b2a85f1b937105cc43d3c70d",
"text": "Regular expressions are extremely useful, because they allow us to work with text in terms of patterns. They are considered the most sophisticated means of performing operations such as string searching, manipulation, validation, and formatting in all applications that deal with text data. Character recognition problem scenarios in sequence analysis that are ideally suited for the application of regular expression algorithms. This paper describes a use of regular expressions in this problem domain, and demonstrates how the effective use of regular expressions that can serve to facilitate more efficient and more effective character recognition.",
"title": ""
},
{
"docid": "b7a4b6f6f3028923853649077c18dfa5",
"text": "The increasing ageing population around the world and the increased risk of falling among this demographic, challenges society and technology to find better ways to mitigate the occurrence of such costly and detrimental events as falls. The most common activity associated with falls is bed transfers; therefore, the most significant high risk activity. Several technological solutions exist for bed exiting detection using a variety of sensors which are attached to the body, bed or floor. However, lack of real life performance studies, technical limitations and acceptability are still key issues. In this research, we present and evaluate a novel method for mitigating the high falls risk associated with bed exits based on using an inexpensive, privacy preserving and passive sensor enabled RFID device. Our approach is based on a classification system built upon conditional random fields that requires no preprocessing of sensorial and RF metrics data extracted from an RFID platform. We evaluated our classification algorithm and the wearability of our sensor using elderly volunteers (66-86 y.o.). The results demonstrate the validity of our approach and the performance is an improvement on previous bed exit classification studies. The participants of the study also overwhelmingly agreed that the sensor was indeed wearable and presented no problems.",
"title": ""
},
{
"docid": "cbcf4ca356682ee9c09b87fa1cd26ba2",
"text": "The field of data analytics is currently going through a renaissance as a result of ever-increasing dataset sizes, the value of the models that can be trained from those datasets, and a surge in flexible, distributed programming models. In particular, the Apache Hadoop and Spark programming systems, as well as their supporting projects (e.g. HDFS, SparkSQL), have greatly simplified the analysis and transformation of datasets whose size exceeds the capacity of a single machine. While these programming models facilitate the use of distributed systems to analyze large datasets, they have been plagued by performance issues. The I/O performance bottlenecks of Hadoop are partially responsible for the creation of Spark. Performance bottlenecks in Spark due to the JVM object model, garbage collection, interpreted/managed execution, and other abstraction layers are responsible for the creation of additional optimization layers, such as Project Tungsten. Indeed, the Project Tungsten issue tracker states that the \"majority of Spark workloads are not bottlenecked by I/O or network, but rather CPU and memory\".\n In this work, we address the CPU and memory performance bottlenecks that exist in Apache Spark by accelerating user-written computational kernels using accelerators. We refer to our approach as Spark With Accelerated Tasks (SWAT). SWAT is an accelerated data analytics (ADA) framework that enables programmers to natively execute Spark applications on high performance hardware platforms with co-processors, while continuing to write their applications in a JVM-based language like Java or Scala. Runtime code generation creates OpenCL kernels from JVM bytecode, which are then executed on OpenCL accelerators. In our work we emphasize 1) full compatibility with a modern, existing, and accepted data analytics platform, 2) an asynchronous, event-driven, and resource-aware runtime, 3) multi-GPU memory management and caching, and 4) ease-of-use and programmability. Our performance evaluation demonstrates up to 3.24x overall application speedup relative to Spark across six machine learning benchmarks, with a detailed investigation of these performance improvements.",
"title": ""
},
{
"docid": "aa23ee34f7117f6d5f83374b8623f4dc",
"text": "PURPOSE OF REVIEW\nThe notion that play may facilitate learning has long been touted. Here, we review how video game play may be leveraged for enhancing attentional control, allowing greater cognitive flexibility and learning and in turn new routes to better address developmental disorders.\n\n\nRECENT FINDINGS\nVideo games, initially developed for entertainment, appear to enhance the behavior in domains as varied as perception, attention, task switching, or mental rotation. This surprisingly wide transfer may be mediated by enhanced attentional control, allowing increased signal-to-noise ratio and thus more informed decisions.\n\n\nSUMMARY\nThe possibility of enhancing attentional control through targeted interventions, be it computerized training or self-regulation techniques, is now well established. Embedding such training in video game play is appealing, given the astounding amount of time spent by children and adults worldwide with this media. It holds the promise of increasing compliance in patients and motivation in school children, and of enhancing the use of positive impact games. Yet for all the promises, existing research indicates that not all games are created equal: a better understanding of the game play elements that foster attention and learning as well as of the strategies developed by the players is needed. Computational models from machine learning or developmental robotics provide a rich theoretical framework to develop this work further and address its impact on developmental disorders.",
"title": ""
},
{
"docid": "461d47e03c5740d744dd3e3cbb1e2216",
"text": "The Multidimensional Personality Questionnaire (MPQ; A. Tellegen, 1982, in press) provides for a comprehensive analysis of personality at both the lower order trait and broader structural levels. Its higher order dimensions of Positive Emotionality, Negative Emotionality, and Constraint embody affect and temperament constructs, which have been conceptualized in psychobiological terms. The MPQ thus holds considerable potential as a structural framework for investigating personality across varying levels of analysis, and this potential would be enhanced by the availability of an abbreviated version. This article describes efforts to develop and validate a brief (155-item) form, the MPQ-BF. Success was evidenced by uniformly high correlations between the brief- and full-form trait scales and consistency of higher order structures. The MPQ-BF is recommended as a tool for investigating the genetic, neurobiological, and psychological substrates of personality.",
"title": ""
},
{
"docid": "f25f7ae3fc614a236f3948d68f488c5b",
"text": "Internet of Things (IoT) has gained substantial attention recently and play a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud from diverse data sources. We introduce the concept of IoT and present in detail ten different parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state-of-the art in sensor data fusion against our sensor data fusion framework. Our main goal is to examine and survey different sensor data fusion research efforts based on our evaluation framework. The major open research issues related to sensor data fusion are also presented.",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "e6d7399b88c57aebca0a43662d7fd855",
"text": "UNLABELLED\nAlthough the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measureable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms.\n\n\nSIGNIFICANCE STATEMENT\nDuring skill learning, the brain relies on sensory feedback to improve motor performance. However, the neural basis of sensorimotor learning is poorly understood. Here, we investigate the role of the neurotransmitter dopamine in regulating vocal learning in the Bengalese finch, a songbird with an extremely precise singing behavior that can nevertheless be reshaped dramatically by auditory feedback. Our findings show that reduction of dopamine inputs to a region of the songbird basal ganglia greatly impairs vocal learning but has no detectable effect on vocal performance. These results suggest a specific role for dopamine in regulating vocal plasticity.",
"title": ""
},
{
"docid": "30c5f12ecaec4f385c2be3bb8ef8eb1e",
"text": "Human has the ability to roughly estimate the distance and size of an object because of the stereo vision of human's eyes. In this project, we proposed to utilize stereo vision system to accurately measure the distance and size (height and width) of object in view. Object size identification is very useful in building systems or applications especially in autonomous system navigation. Many recent works have started to use multiple vision sensors or cameras for different type of application such as 3D image constructions, occlusion detection and etc. Multiple cameras system has becoming more popular since cameras are now very cheap and easy to deploy and utilize. The proposed measurement system consists of object detection on the stereo images and blob extraction and distance and size calculation and object identification. The system also employs a fast algorithm so that the measurement can be done in real-time. The object measurement using stereo camera is better than object detection using a single camera that was proposed in many previous research works. It is much easier to calibrate and can produce a more accurate results.",
"title": ""
},
{
"docid": "71757d1cee002bb235a591cf0d5aafd5",
"text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiplefactor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "815feed9cce2344872c50da6ffb77093",
"text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.",
"title": ""
},
{
"docid": "4494d5b42c8daf6a45608159a748fd7d",
"text": "A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.",
"title": ""
},
{
"docid": "ed35d80dd3af3acbe75e5122b2378756",
"text": "We present a system whereby the human voice may specify continuous control signals to manipulate a simulated 2D robotic arm and a real 3D robotic arm. Our goal is to move towards making accessible the manipulation of everyday objects to individuals with motor impairments. Using our system, we performed several studies using control style variants for both the 2D and 3D arms. Results show that it is indeed possible for a user to learn to effectively manipulate real-world objects with a robotic arm using only non-verbal voice as a control mechanism. Our results provide strong evidence that the further development of non-verbal voice controlled robotics and prosthetic limbs will be successful.",
"title": ""
},
{
"docid": "b99944ad31c5ad81d0e235c200a332b4",
"text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.",
"title": ""
},
{
"docid": "ec7c9fa71dcf32a3258ee8712ccb95c1",
"text": "Fuzzy graph is now a very important research area due to its wide application. Fuzzy multigraph and fuzzy planar graphs are two subclasses of fuzzy graph theory. In this paper, we define both of these graphs and studied a lot of properties. A very close association of fuzzy planar graph is fuzzy dual graph. This is also defined and studied several properties. The relation between fuzzy planar graph and fuzzy dual graph is also established.",
"title": ""
},
{
"docid": "17162eac4f1292e4c2ad7ef83af803f1",
"text": "Recent years have witnessed significant progresses in deep Reinforcement Learning (RL). Empowered with large scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen “robustly”: commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. The observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
},
{
"docid": "0b9ae0bf6f6201249756d87a56f0005e",
"text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.",
"title": ""
}
] | scidocsrr |
39f0dc05e8821d849ae30548cca70e0a | Direct and indirect pathways of basal ganglia: a critical reappraisal | [
{
"docid": "cce75a31fde0740700087125c884e862",
"text": "Neural circuits of the basal ganglia are critical for motor planning and action selection. Two parallel basal ganglia pathways have been described, and have been proposed to exert opposing influences on motor function. According to this classical model, activation of the ‘direct’ pathway facilitates movement and activation of the ‘indirect’ pathway inhibits movement. However, more recent anatomical and functional evidence has called into question the validity of this hypothesis. Because this model has never been empirically tested, the specific function of these circuits in behaving animals remains unknown. Here we report direct activation of basal ganglia circuitry in vivo, using optogenetic control of direct- and indirect-pathway medium spiny projection neurons (MSNs), achieved through Cre-dependent viral expression of channelrhodopsin-2 in the striatum of bacterial artificial chromosome transgenic mice expressing Cre recombinase under control of regulatory elements for the dopamine D1 or D2 receptor. Bilateral excitation of indirect-pathway MSNs elicited a parkinsonian state, distinguished by increased freezing, bradykinesia and decreased locomotor initiations. In contrast, activation of direct-pathway MSNs reduced freezing and increased locomotion. In a mouse model of Parkinson’s disease, direct-pathway activation completely rescued deficits in freezing, bradykinesia and locomotor initiation. Taken together, our findings establish a critical role for basal ganglia circuitry in the bidirectional regulation of motor behaviour and indicate that modulation of direct-pathway circuitry may represent an effective therapeutic strategy for ameliorating parkinsonian motor deficits.",
"title": ""
}
] | [
{
"docid": "e82d3eedc733d536c49a69856ad66e00",
"text": "Artificial neural networks, trained only on sample deals, without presentation of any human knowledge or even rules of the game, are used to estimate the number of tricks to be taken by one pair of bridge players in the so-called double dummy bridge problem (DDBP). Four representations of a deal in the input layer were tested leading to significant differences in achieved results. In order to test networks' abilities to extract knowledge from sample deals, experiments with additional inputs representing estimators of hand's strength used by humans were also performed. The superior network trained solely on sample deals outperformed all other architectures, including those using explicit human knowledge of the game of bridge. Considering the suit contracts, this network, in a sample of 100 000 testing deals, output a perfect answer in 53.11% of the cases and only in 3.52% of them was mistaken by more than one trick. The respective figures for notrump contracts were equal to 37.80% and 16.36%. The above results were compared with the ones obtained by 24 professional human bridge players-members of The Polish Bridge Union-on test sets of sizes between 27 and 864 deals per player (depending on player's time availability). In case of suit contracts, the perfect answer was obtained in 53.06% of the testing deals for ten upper-classified players and in 48.66% of them, for the remaining 14 participants of the experiment. For the notrump contracts, the respective figures were equal to 73.68% and 60.78%. Except for checking the ability of neural networks in solving the DDBP, the other goal of this research was to analyze connection weights in trained networks in a quest for weights' patterns that are explainable by experienced human bridge players. Quite surprisingly, several such patterns were discovered (e.g., preference for groups of honors, drawing special attention to Aces, favoring cards from a trump suit, gradual importance of cards in one suit-from two to the Ace, etc.). Both the numerical figures and weight patterns are stable and repeatable in a sample of neural architectures (differing only by randomly chosen initial weights). In summary, the piece of research described in this paper provides a detailed comparison between various data representations of the DDBP solved by neural networks. On a more general note, this approach can be extended to a certain class of binary classification problems.",
"title": ""
},
{
"docid": "397f1c1a01655098d8b35b04011400c7",
"text": "Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.",
"title": ""
},
{
"docid": "a0a46f9ec5221b1a6c95bb8c45f1a8a7",
"text": "This paper describes the steps for achieving data processing in a methodological context, which take part of a methodology previously proposed by the authors for developing Data Mining (DM) applications, called \"Methodology for the development of data mining applications based on organizational analysis\". The methodology has three main phases: Knowledge of the Organization, Preparation and treatment of data, and finally, development of the DM application. We will focus on the second phase. The main contribution of this proposal is the design of a methodological framework of the second phase based on the paradigm of Data Science (DS), in order to get what we have called “Vista Minable Operacional” (VMO) from the “Vista Minable Conceptual” (VMC). The VMO built is used in the third phase. This methodological framework has been applied in two different cases of study, oil and public health.",
"title": ""
},
{
"docid": "5bd61380b9b05b3e89d776c6cbeb0336",
"text": "Cross-domain text classification aims to automatically train a precise text classifier for a target domain by using labelled text data from a related source domain. To this end, one of the most promising ideas is to induce a new feature representation so that the distributional difference between domains can be reduced and a more accurate classifier can be learned in this new feature space. However, most existing methods do not explore the duality of the marginal distribution of examples and the conditional distribution of class labels given labeled training examples in the source domain. Besides, few previous works attempt to explicitly distinguish the domain-independent and domain-specific latent features and align the domain-specific features to further improve the cross-domain learning. In this paper, we propose a model called Partially Supervised Cross-Collection LDA topic model (PSCCLDA) for cross-domain learning with the purpose of addressing these two issues in a unified way. Experimental results on nine datasets show that our model outperforms two standard classifiers and four state-of-the-art methods, which demonstrates the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "1f95cc7adafe07ad9254359ab405a980",
"text": "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.",
"title": ""
},
{
"docid": "b226a7914df82ee82ec051a7ba76fc87",
"text": "Network science plays a big role in the representation of real-world phenomena such as user-item bipartite networks presented in e-commerce or social media platforms. It provides researchers with tools and techniques to solve complex real-world problems. Identifying and predicting future popularity and importance of items in e-commerce or social media platform is a challenging task. Some items gain popularity repeatedly over time while some become popular and novel only once. This work aims to identify the key-factors: popularity and novelty. To do so, we consider two types of novelty predictions: items appearing in the popular ranking list for the first time; and items which were not in the popular list in the past time window, but might have been popular before the recent past time window. In order to identify the popular items, a careful consideration of macro-level analysis is needed. In this work we propose a model, which exploits item level information over a span of time to rank the importance of the item. We considered ageing or decay effect along with the recent link-gain of the items. We test our proposed model on four various real-world datasets using four information retrieval based metrics.",
"title": ""
},
{
"docid": "1e4a502bfd4ae5ceffd922e48f8e364a",
"text": "A soft wearable robot, which is an emerging type of wearable robot, can take advantage of tendon-driven mechanisms with a Bowden cable. These tendon-driven mechanisms benefits soft wearable robots because the actuator can be remotely placed and the transmission is very compact. However, it is difficult to compensate the friction along the Bowden cable which makes it hard to control. This study proposes the use of a position-based impedance controller, which is robust to the nonlinear dynamics of the system and provides compliant interaction between robot, human, and environment. Additionally, to eliminate disturbances from unexpected tension of the antagonistic wire arising from friction, this study proposes a new type of slack enabling tendon actuator. It can eliminate friction force along the antagonistic wire by actively pushing the wire while preventing derailment of the wire from the spool.",
"title": ""
},
{
"docid": "d4e5a5aa65017360db9a87590a728892",
"text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "46ad437443c58d90d4956d4e8ba99888",
"text": "The attributes of individual software engineers are perhaps the most important factors in determining the success of software development. Our goal is to identify the professional competencies that are most essential. In particular, we seek to identify the attributes that di erentiate between exceptional and non-exceptional software engineers. Phase 1 of our research is a qualitative study designed to identify competencies to be used in the quantitative analysis performed in Phase 2. In Phase 1, we conduct an in-depth review of ten exceptional and ten non-exceptional software engineers working for a major computing rm. We use biographical data and Myers-Briggs Type Indicator test results to characterize our sample. We conduct Critical Incident Interviews focusing on the subjects experience in software and identify 38 essential competencies of software engineers. Phase 2 of this study surveys 129 software engineers to determine the competencies that are di erential between exceptional and non-exceptional engineers. Years of experience in software is the only biographical predictor of performance. Analysis of the participants Q-Sort of the 38 competencies identi ed in Phase 1 reveals that nine of these competencies are di erentially related to engineer performance using a t-test. A ten variable Canonical Discrimination Function consisting of three biographical variables and seven competencies is capable of correctly classifying 81% of the cases. The statistical analyses indicate that exceptional engineers (at the company studied) can be distinguished by behaviors associated with an external focus | behaviors directed at people or objects outside the individual. Exceptional engineers are more likely than non-exceptional engineers to maintain a \\big picture\", have a bias for action, be driven by a sense of mission, exhibit and articulate strong convictions, play a pro-active role with management, and help other engineers. Authors addresses: R. Turley, Colorado Memory Systems, Inc., 800 S. Taft Ave., Loveland, CO 80537. Email: RICKTURL.COMEMSYS@CMS SMTP.gr.hp.com, (303) 635-6490, Fax: (303) 635-6613; J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523. Email: bieman@cs.colostate.edu, (303)4917096, Fax: (303) 491-6639. Copyright c 1993 by Richard T. Turley and James M. Bieman. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the author. Direct correspondence concerning this paper to: J. Bieman, Department of Computer Science, Colorado State University, Fort Collins, CO 80523, bieman@cs.colostate.edu, (303)491-7096, Fax: (303)491-6639.",
"title": ""
},
{
"docid": "0884651e01add782a7d58b40f6ba078f",
"text": "Several statistics have been published dealing with failure causes of high voltage rotating machines i n general and power generators in particular [1 4]. Some of the se statistics only specify the part of the machine which failed without giving any deeper insight in the failure mechanism. Other publications distinguish between the damage which caused the machine to fail and the root cause which effect ed the damage. The survey of 1199 hydrogenerators c ar ied out by the CIGRE study committee SC11, EG11.02 provides an ex mple of such an investigation [5]. It gives det ail d results of 69 incidents. 56% of the failed machines showed an insulation damage, other major types being mecha ni al, thermal and bearing damages (Figure 1a). Root causes which led to these damages are subdivided into 7 differen t groups (Figure 1b).",
"title": ""
},
{
"docid": "09ae8e02304fb3a179d343ab7f20c6cb",
"text": "Object-based point cloud analysis (OBPA) is useful for information extraction from airborne LiDAR point clouds. An object-based classification method is proposed for classifying the airborne LiDAR point clouds in urban areas herein. In the process of classification, the surface growing algorithm is employed to make clustering of the point clouds without outliers, thirteen features of the geometry, radiometry, topology and echo characteristics are calculated, a support vector machine (SVM) is utilized to classify the segments, and connected component analysis for 3D point clouds is proposed to optimize the original classification results. Three datasets with different point densities and complexities are employed to test our method. Experiments suggest that the proposed method is capable of making a classification of the urban point clouds with the overall classification accuracy larger than 92.34% and the Kappa coefficient larger than 0.8638, and the classification accuracy is promoted with the increasing of the point density, which is meaningful for various types of applications. Keyword: airborne LiDAR; object-based classification; point clouds; segmentation; SVM OPEN ACCESS Remote Sens. 2013, 5 3750",
"title": ""
},
{
"docid": "c2d926337d32cf88838546d19e6f9bde",
"text": "This paper discusses the use of natural language or „conversational‟ agents in e-learning environments. We describe and contrast the various applications of conversational agent technology represented in the e-learning literature, including tutors, learning companions, language practice and systems to encourage reflection. We offer two more detailed examples of conversational agents, one which provides learning support, and the other support for self-assessment. Issues and challenges for developers of conversational agent systems for e-learning are identified and discussed.",
"title": ""
},
{
"docid": "98f141ce5edc52f6cacdba7dcf028f3a",
"text": "We consider the problem of automatically acquiring knowledge about the typical temporal orderings among relations (e.g., actedIn(person, film) typically occurs before wonPrize (film, award)), given only a database of known facts (relation instances) without time information, and a large document collection. Our approach is based on the conjecture that the narrative order of verb mentions within documents correlates with the temporal order of the relations they represent. We propose a family of algorithms based on this conjecture, utilizing a corpus of 890m dependency parsed sentences to obtain verbs that represent relations of interest, and utilizing Wikipedia documents to gather statistics on narrative order of verb mentions. Our proposed algorithm, GraphOrder, is a novel and scalable graph-based label propagation algorithm that takes transitivity of temporal order into account, as well as these statistics on narrative order of verb mentions. This algorithm achieves as high as 38.4% absolute improvement in F1 over a random baseline. Finally, we demonstrate the utility of this learned general knowledge about typical temporal orderings among relations, by showing that these temporal constraints can be successfully used by a joint inference framework to assign specific temporal scopes to individual facts.",
"title": ""
},
{
"docid": "20fd36e287a631c82aa8527e6a36931f",
"text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.",
"title": ""
},
{
"docid": "44ca351c024e61b06b1709ba0e4db44f",
"text": "Rootkits affect system security by modifying kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify noncontrol data. Most prior techniques for rootkit detection have focused solely on detecting control data modifications and, therefore, fail to detect such rootkits. This paper presents a novel technique to detect rootkits that modify both control and noncontrol data. The main idea is to externally observe the execution of the kernel during an inference phase and hypothesize invariants on kernel data structures. A rootkit detection phase uses these invariants as specifications of data structure integrity. During this phase, violation of invariants indicates an infection. We have implemented Gibraltar, a prototype tool that infers kernel data structure invariants and uses them to detect rootkits. Experiments show that Gibraltar can effectively detect previously known rootkits, including those that modify noncontrol data structures.",
"title": ""
},
{
"docid": "3c24165f70675be40ad42f8d8ce09c33",
"text": "This paper describes the design of a smart, motorized, voice controlled wheelchair using embedded system. Proposed design supports voice activation system for physically differently abled persons incorporating manual operation. This paper represents the “Voice-controlled Wheel chair” for the physically differently abled person where the voice command controls the movements of the wheelchair. The voice command is given through a cellular device having Bluetooth and the command is transferred and converted to string by the BT Voice Control for Arduino and is transferred to the Bluetooth Module SR-04connected to the Arduino board for the control of the Wheelchair. For example, when the user says „Go‟ then chair will move in forward direction and when he says „Back‟ then the chair will move in backward direction and similarly „Left‟, „Right‟ for rotating it in left and right directions respectively and „Stop‟ for making it stop. This system was designed and developed to save cost, time and energy of the patient. Ultrasonic sensor is also made a part of the design and it helps to detect obstacles lying ahead in the way of the wheelchair that can hinder the passage of the wheelchair.",
"title": ""
},
{
"docid": "96793943025ba92a7672444597b3a443",
"text": "This document describes Tree Kernel-SVM based methods for identifying sentences that could be improved in scientific text. This has the goal of contributing to the body of knowledge that attempt to build assistive tools to aid scientist improve the quality of their writings. Our methods consist of a combination of the output from multiple support vector machines which use Tree Kernel computations. Therefore, features for individual sentences are trees that reflect their grammatical structure. For the AESW 2016 Shared Task we built systems that provide probabilistic and binary outputs by using these models for trees comparisons.",
"title": ""
},
{
"docid": "8fd5b35d456e99df004c8899c1c22653",
"text": "The area of cluster-level energy management has attracted s ignificant research attention over the past few years. One class of techniques to reduce the energy consumption of clusters is to sel ectively power down nodes during periods of low utilization to increa s energy efficiency. One can think of a number of ways of selective ly powering down nodes, each with varying impact on the workloa d response time and overall energy consumption. Since the Map Reduce framework is becoming “ubiquitous”, the focus of this p aper is on developing a framework for systematically considerin g various MapReduce node power down strategies, and their impact o n the overall energy consumption and workload response time. We closely examine two extreme techniques that can be accommodated in this framework. The first is based on a recently pro posed technique called “Covering Set” (CS) that keeps only a sm ll fraction of the nodes powered up during periods of low utiliz ation. At the other extreme is a technique that we propose in this pap er, called the All-In Strategy (AIS). AIS uses all the nodes in th e cluster to run a workload and then powers down the entire cluster. Using both actual evaluation and analytical modeling we bring out the differences between these two extreme techniques and show t hat AIS is often the right energy saving strategy.",
"title": ""
},
{
"docid": "55a798fd7ec96239251fce2a340ba1ba",
"text": "At EUROCRYPT’88, we introduced an interactive zero-howledge protocol ( G ~ O U and Quisquater [13]) fitted to the authentication of tamper-resistant devices (e.g. smart cads , Guillou and Ugon [14]). Each security device stores its secret authentication number, an RSA-like signature computed by an authority from the device identity. Any transaction between a tamperresistant security device and a verifier is limited to a unique interaction: the device sends its identity and a random test number; then the verifier teUs a random large question; and finally the device answers by a witness number. The transaction is successful when the test number is reconstructed from the witness number, the question and the identity according to numbers published by the authority and rules of redundancy possibly standardized. This protocol allows a cooperation between users in such a way that a group of cooperative users looks like a new entity, having a shadowed identity the product of the individual shadowed identities, while each member reveals nothing about its secret. In another scenario, the secret is partitioned between distinkt devices sharing the same identity. A group of cooperative users looks like a unique user having a larger public exponent which is the greater common multiple of each individual exponent. In this paper, additional features are introduced in order to provide: firstly, a mutual interactive authentication of both communicating entities and previously exchanged messages, and, secondly, a digital signature of messages, with a non-interactive zero-knowledge protocol. The problem of multiple signature is solved here in a very smart way due to the possibilities of cooperation between users. The only secret key is the factors of the composite number chosen by the authority delivering one authentication number to each smart card. This key is not known by the user. At the user level, such a scheme may be considered as a keyless identity-based integrity scheme. This integrity has a new and important property: it cannot be misused, i.e. derived into a confidentiality scheme.",
"title": ""
}
] | scidocsrr |
f71fbccca7f7cca0a0e87fce5e1e9f92 | Generative Adversarial Privacy | [
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "5c716fbdc209d5d9f703af1e88f0d088",
"text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.",
"title": ""
}
] | [
{
"docid": "6b8281957b0fd7e9ff88f64b8b6462aa",
"text": "As Critical National Infrastructures are becoming more vulnerable to cyber attacks, their protection becomes a significant issue for any organization as well as a nation. Moreover, the ability to attribute is a vital element of avoiding impunity in cyberspace. In this article, we present main threats to critical infrastructures along with protective measures that one nation can take, and which are classified according to legal, technical, organizational, capacity building, and cooperation aspects. Finally we provide an overview of current methods and practices regarding cyber attribution and cyber peace keeping.",
"title": ""
},
{
"docid": "791f440add573b1c35daca1d6eb7bcf4",
"text": "PURPOSE\nNivolumab, a programmed death-1 (PD-1) immune checkpoint inhibitor antibody, has demonstrated improved survival over docetaxel in previously treated advanced non-small-cell lung cancer (NSCLC). First-line monotherapy with nivolumab for advanced NSCLC was evaluated in the phase I, multicohort, Checkmate 012 trial.\n\n\nMETHODS\nFifty-two patients received nivolumab 3 mg/kg intravenously every 2 weeks until progression or unacceptable toxicity; postprogression treatment was permitted per protocol. The primary objective was to assess safety; secondary objectives included objective response rate (ORR) and 24-week progression-free survival (PFS) rate; overall survival (OS) was an exploratory end point.\n\n\nRESULTS\nAny-grade treatment-related adverse events (AEs) occurred in 71% of patients, most commonly: fatigue (29%), rash (19%), nausea (14%), diarrhea (12%), pruritus (12%), and arthralgia (10%). Ten patients (19%) reported grade 3 to 4 treatment-related AEs; grade 3 rash was the only grade 3 to 4 event occurring in more than one patient (n = 2; 4%). Six patients (12%) discontinued because of a treatment-related AE. The confirmed ORR was 23% (12 of 52), including four ongoing complete responses. Nine of 12 responses (75%) occurred by first tumor assessment (week 11); eight (67%) were ongoing (range, 5.3+ to 25.8+ months) at the time of data lock. ORR was 28% (nine of 32) in patients with any degree of tumor PD-ligand 1 expression and 14% (two of 14) in patients with no PD-ligand 1 expression. Median PFS was 3.6 months, and the 24-week PFS rate was 41% (95% CI, 27 to 54). Median OS was 19.4 months, and the 1-year and 18-month OS rates were 73% (95% CI, 59 to 83) and 57% (95% CI, 42 to 70), respectively.\n\n\nCONCLUSION\nFirst-line nivolumab monotherapy demonstrated a tolerable safety profile and durable responses in first-line advanced NSCLC.",
"title": ""
},
{
"docid": "0ae0e78ac068d8bc27d575d90293c27b",
"text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.",
"title": ""
},
{
"docid": "d8acda345bbcb1ef25e3ee9934dd12a2",
"text": "This chapter looks into the key infrastructure factors affecting the success of small companies in developing economies that are establishing B2B ecommerce ventures by aggregating critical success factors from general ecommerce studies and studies from e-commerce in developing countries. The factors were identified through a literature review and case studies of two organizations. The results of the pilot study and literature review reveal five groups of success factors that contribute to the success of B2B e-commerce. These factors were later assessed for importance using a survey. The outcome of our analysis reveals a reduced list of key critical success factors that SMEs should emphasize as well as a couple of key policy implications for governments in developing countries. This chapter appears in the book, e-Business, e-Government & Small and Medium-Sized Enterprises: Opportunities and Challenges, edited by Brian J. Corbitt and Nabeel Al-Qirim. Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.idea-group.com IDEA GROUP PUBLISHING 186 Jennex, Amoroso and Adelakun Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. INTRODUCTION Information and Communication Technology (ICT) can provide a small enterprise an opportunity to conduct business anywhere. Use of the Internet allows small businesses to project virtual storefronts to the world as well as conduct business with other organizations. Heeks and Duncombe (2001) discuss how IT can be used in developing countries to build businesses. Domaracki (2001) discusses how the technology gap between small and large businesses is closing and evening the playing field, making B2B and B2C e-commerce available to any business with access to computers, web browsers, and telecommunication links. This chapter discusses how small start-up companies can use ICT to establish e-commerce applications within developing economies where the infrastructure is not classified as “high-technology”. E-commerce is the process of buying, selling, or exchanging products, services, and information using computer networks including the Internet (Turban et al., 2002). Kalakota and Whinston (1997) define e-commerce using the perspectives of network communications, automated business processes, automated services, and online buying and selling. Turban et al. (2002) add perspectives on collaboration and community. Deise et al. (2000) describe the E-selling process as enabling customers through E-Browsing (catalogues, what we have), E-Buying (ordering, processing, invoicing, cost determination, etc.), and E-Customer Service (contact, etc.). Partial e-commerce occurs when the process is not totally using networks. B2C e-commerce is the electronic sale of goods, services, and content to individuals, Noyce (2002), Turban et al. (2002). B2B e-commerce is a transaction conducted electronically between businesses over the Internet, extranets, intranets, or private networks. Such transactions may be conducted between a business and its supply chain members, as well as between a business and any other business. A business refers to any organization, public or private, for profit or nonprofit (Turban et al., 2002, p. 217; Noyce, 2002; Palvia and Vemuri, 2002). 
Initially, B2B was used almost exclusively by large organizations to buy and sell industrial outputs and/or inputs. More recently B2B has expanded to small and medium sized enterprises, SMEs, who can buy and/or sell products/services directly, Mayer-Guell (2001). B2B transactions tend to be larger in value, more complex, and longer term when compared to B2C transactions with the average B2B transaction being worth $75,000.00 while the average B2C transaction is worth $75.00 (Freeman, 2001). Typical B2B transactions involve order management, credit management and the establishment of trade terms, product delivery and billing, invoice approval, payment, and the management of information for the entire process, Domaracki (2001). Noyce (2002) discusses collaboration as the underlying principle for B2B. The companies chosen as mini-cases for this study meet the basic definition of B2B with their e-commerce ventures as both are selling services over the Internet to other business organizations. Additionally, both provide quotes and the ability to",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "587eea887a3fcb6561833c250ae9c6e3",
"text": "We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems where capture, labeling, and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing to immediately correct errors in the segmentation and/or learning—a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.",
"title": ""
},
{
"docid": "d509601659e2192fb4ea8f112c9d75fe",
"text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.",
"title": ""
},
{
"docid": "ee79f55fe096b195984ecdc1fc570179",
"text": "In bibliographies like DBLP and Citeseer, there are three kinds of entity-name problems that need to be solved. First, multiple entities share one name, which is called the name sharing problem. Second, one entity has different names, which is called the name variant problem. Third, multiple entities share multiple names, which is called the name mixing problem. We aim to solve these problems based on one model in this paper. We call this task complete entity resolution. Different from previous work, our work use global information based on data with two types of information, words and author names. We propose a generative latent topic model that involves both author names and words — the LDA-dual model, by extending the LDA (Latent Dirichlet Allocation) model. We also propose a method to obtain model parameters that is global information. Based on obtained model parameters, we propose two algorithms to solve the three problems mentioned above. Experimental results demonstrate the effectiveness and great potential of the proposed model and algorithms.",
"title": ""
},
{
"docid": "a1292045684debec0e6e56f7f5e85fad",
"text": "BACKGROUND\nLncRNA and microRNA play an important role in the development of human cancers; they can act as a tumor suppressor gene or an oncogene. LncRNA GAS5, originating from the separation from tumor suppressor gene cDNA subtractive library, is considered as an oncogene in several kinds of cancers. The expression of miR-221 affects tumorigenesis, invasion and metastasis in multiple types of human cancers. However, there's very little information on the role LncRNA GAS5 and miR-221 play in CRC. Therefore, we conducted this study in order to analyze the association of GAS5 and miR-221 with the prognosis of CRC and preliminary study was done on proliferation, metastasis and invasion of CRC cells. In the present study, we demonstrate the predictive value of long non-coding RNA GAS5 (lncRNA GAS5) and mircoRNA-221 (miR-221) in the prognosis of colorectal cancer (CRC) and their effects on CRC cell proliferation, migration and invasion.\n\n\nMETHODS\nOne hundred and fifty-eight cases with CRC patients and 173 cases of healthy subjects that with no abnormalities, who've been diagnosed through colonoscopy between January 2012 and January 2014 were selected for the study. After the clinicopathological data of the subjects, tissue, plasma and exosomes were collected, lncRNA GAS5 and miR-221 expressions in tissues, plasma and exosomes were measured by reverse transcription quantitative polymerase chain reaction (RT-qPCR). The diagnostic values of lncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes in patients with CRC were analyzed using receiver operating characteristic curve (ROC). Lentiviral vector was constructed for the overexpression of lncRNA GAS5, and SW480 cell line was used for the transfection of the experiment and assigned into an empty vector and GAS5 groups. The cell proliferation, migration and invasion were tested using a cell counting kit-8 assay and Transwell assay respectively.\n\n\nRESULTS\nThe results revealed that LncRNA GAS5 was upregulated while the miR-221 was downregulated in the tissues, plasma and exosomes of patients with CRC. The results of ROC showed that the expressions of both lncRNA GAS5 and miR-221 in the tissues, plasma and exosomes had diagnostic value in CRC. While the LncRNA GAS5 expression in tissues, plasma and exosomes were associated with the tumor node metastasis (TNM) stage, Dukes stage, lymph node metastasis (LNM), local recurrence rate and distant metastasis rate, the MiR-221 expression in tissues, plasma and exosomes were associated with tumor size, TNM stage, Dukes stage, LNM, local recurrence rate and distant metastasis rate. LncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes were found to be independent prognostic factors for CRC. Following the overexpression of GAS5, the GAS5 expressions was up-regulated and miR-221 expression was down-regulated; the rate of cell proliferation, migration and invasion were decreased.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "268a9b3a1a567c25c5ba93708b0a167b",
"text": "Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but developing an embedding learning method that is flexible enough to accommodate variations in physical networks is still a challenging problem. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of the graph embedding learning, and propose to extend this by introducing a multi-shot \"unsupervised\" learning framework where a 2-layer MLP network for every shot .The framework can be extended to accommodate a variety of homogeneous and heterogeneous networks. Empirical results on several real-world data set show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph based multi-label classification tasks.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "b1d9e27972b2ea9af105bc6c026fddc9",
"text": "Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.",
"title": ""
},
{
"docid": "9b656d1ae57b43bb2ccf2d971e46eae3",
"text": "On the one hand, enterprises manufacturing any kinds of goods require agile production technology to be able to fully accommodate their customers’ demand for flexibility. On the other hand, Smart Objects, such as networked intelligent machines or tagged raw materials, exhibit ever increasing capabilities, up to the point where they offer their smart behaviour as web services. The two trends towards higher flexibility and more capable objects will lead to a service-oriented infrastructure where complex processes will span over all types of systems — from the backend enterprise system down to the Smart Objects. To fully support this, we present SOCRADES, an integration architecture that can serve the requirements of future manufacturing. SOCRADES provides generic components upon which sophisticated production processes can be modelled. In this paper we in particular give a list of requirements, the design, and the reference implementation of that integration architecture.",
"title": ""
},
{
"docid": "7519e3a8326e2ef2ebd28c22e80c4e34",
"text": "This paper presents a synthetic framework identifying the central drivers of start-up commercialization strategy and the implications of these drivers for industrial dynamics. We link strategy to the commercialization environment – the microeconomic and strategic conditions facing a firm that is translating an \" idea \" into a value proposition for customers. The framework addresses why technology entrepreneurs in some environments undermine established firms, while others cooperate with incumbents and reinforce existing market power. Our analysis suggests that competitive interaction between start-up innovators and established firms depends on the presence or absence of a \" market for ideas. \" By focusing on the operating requirements, efficiency, and institutions associated with markets for ideas, this framework holds several implications for the management of high-technology entrepreneurial firms. (Stern). We would like to thank the firms who participate in the MIT Commercialization Strategies survey for their time and effort. The past two decades have witnessed a dramatic increase in investment in technology entrepreneurship – the founding of small, start-up firms developing inventions and technology with significant potential commercial application. Because of their youth and small size, start-up innovators usually have little experience in the markets for which their innovations are most appropriate, and they have at most two or three technologies at the stage of potential market introduction. For these firms, a key management challenge is how to translate promising",
"title": ""
},
{
"docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5",
"text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.",
"title": ""
},
{
"docid": "1d507afcd430b70944bd7f460ee90277",
"text": "Moringa oleifera, or the horseradish tree, is a pan-tropical species that is known by such regional names as benzolive, drumstick tree, kelor, marango, mlonge, mulangay, nébéday, saijhan, and sajna. Over the past two decades, many reports have appeared in mainstream scientific journals describing its nutritional and medicinal properties. Its utility as a non-food product has also been extensively described, but will not be discussed herein, (e.g. lumber, charcoal, fencing, water clarification, lubricating oil). As with many reports of the nutritional or medicinal value of a natural product, there are an alarming number of purveyors of “healthful” food who are now promoting M. oleifera as a panacea. While much of this recent enthusiasm indeed appears to be justified, it is critical to separate rigorous scientific evidence from anecdote. Those who charge a premium for products containing Moringa spp. must be held to a high standard. Those who promote the cultivation and use of Moringa spp. in regions where hope is in short supply must be provided with the best available evidence, so as not to raise false hopes and to encourage the most fruitful use of scarce research capital. It is the purpose of this series of brief reviews to: (a) critically evaluate the published scientific evidence on M. oleifera, (b) highlight claims from the traditional and tribal medicinal lore and from non-peer reviewed sources that would benefit from further, rigorous scientific evaluation, and (c) suggest directions for future clinical research that could be carried out by local investigators in developing regions. This is the first of four planned papers on the nutritional, therapeutic, and prophylactic properties of Moringa oleifera. In this introductory paper, the scientific evidence for health effects are summarized in tabular format, and the strength of evidence is discussed in very general terms. A second paper will address a select few uses of Moringa in greater detail than they can be dealt with in the context of this paper. A third paper will probe the phytochemical components of Moringa in more depth. A fourth paper will lay out a number of suggested research projects that can be initiated at a very small scale and with very limited resources, in geographic regions which are suitable for Moringa cultivation and utilization. In advance of this fourth paper in the series, the author solicits suggestions and will gladly acknowledge contributions that are incorporated into the final manuscript. It is the intent and hope of the journal’s editors that such a network of small-scale, locally executed investigations might be successfully woven into a greater fabric which will have enhanced scientific power over similar small studies conducted and reported in isolation. Such an approach will have the added benefit that statistically sound planning, peer review, and multi-center coordination brings to a scientific investigation. Copyright: ©2005 Jed W. Fahey This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Contact: Jed W. Fahey Email: jfahey@jhmi.edu Received: September 15, 2005 Accepted: November 20, 2005 Published: December 1, 2005 The electronic version of this article is the complete one and can be found online at: http://www.TFLJournal.org/article.php/200512011",
"title": ""
}
] | scidocsrr |
8f754bda1b9615ba479f386b86764ae7 | MFCC and its applications in speaker recognition | [
{
"docid": "ea8716e339cdc51210f64436a5c91c44",
"text": "Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970’s to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. (Intelligent Data Analysis, Vol. I, no. 3, http:llwwwelsevier.co&ocate/ida)",
"title": ""
}
] | [
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "e6c1747e859f64517e7dddb6c1fd900e",
"text": "More and more mobile objects are now equipped with sensors allowing real time monitoring of their movements. Nowadays, the data produced by these sensors can be stored in spatio-temporal databases. The main goal of this article is to perform a data mining on a huge quantity of mobile object’s positions moving in an open space in order to deduce its behaviour. New tools must be defined to ease the detection of outliers. First of all, a zone graph is set up in order to define itineraries. Then, trajectories of mobile objects following the same itinerary are extracted from the spatio-temporal database and clustered. A statistical analysis on this set of trajectories lead to spatio-temporal patterns such as the main route and spatio-temporal channel followed by most of trajectories of the set. Using these patterns, unusual situations can be detected. Furthermore, a mobile object’s behaviour can be defined by comparing its positions with these spatio-temporal patterns. In this article, this technique is applied to ships’ movements in an open maritime area. Unusual behaviours such as being ahead of schedule or delayed or veering to the left or to the right of the main route are detected. A case study illustrates these processes based on ships’ positions recorded during two years around the Brest area. This method can be extended to almost all kinds of mobile objects (pedestrians, aircrafts, hurricanes, ...) moving in an open area.",
"title": ""
},
{
"docid": "4c12c08d72960b3b75662e9459e23079",
"text": "Graph structures play a critical role in computer vision, but they are inconvenient to use in pattern recognition tasks because of their combinatorial nature and the consequent difficulty in constructing feature vectors. Spectral representations have been used for this task which are based on the eigensystem of the graph Laplacian matrix. However, graphs of different sizes produce eigensystems of different sizes where not all eigenmodes are present in both graphs. We use the Levenshtein distance to compare spectral representations under graph edit operations which add or delete vertices. The spectral representations are therefore of different sizes. We use the concept of the string-edit distance to allow for the missing eigenmodes and compare the correct modes to each other. We evaluate the method by first using generated graphs to compare the effect of vertex deletion operations. We then examine the performance of the method on graphs from a shape database.",
"title": ""
},
{
"docid": "f3abf5a6c20b6fff4970e1e63c0e836b",
"text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.",
"title": ""
},
{
"docid": "209b304009db4a04400da178d19fe63e",
"text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion which significantly facilitates the lateral motion of vehicles on hard rough surfaces and soft soils such as sand which cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.",
"title": ""
},
{
"docid": "71140208f527bc0b1f193550a587d9ed",
"text": "Data sets are often modeled as point clouds in R, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption for encoding efficiently the data by projecting onto a dictionary of d vectors in R (for example found by SVD), at a cost (n + D)d for n data points. When M is nonlinear, there are no “explicit” constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box optimization. In this paper we construct data-dependent multi-scale dictionaries that aim at efficient encoding and manipulating of the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa. In addition, data points are guaranteed to have a sparse representation in terms of the dictionary. We think of dictionaries as the analogue of wavelets, but for approximating point clouds rather than functions.",
"title": ""
},
{
"docid": "1c9dd9b98b141e87ca7b74e995630456",
"text": "Transportation systems in mega-cities are often affected by various kinds of events such as natural disasters, accidents, and public gatherings. Highly dense and complicated networks in the transportation systems propagate confusion in the network because they offer various possible transfer routes to passengers. Visualization is one of the most important techniques for examining such cascades of unusual situations in the huge networks. This paper proposes visual integration of traffic analysis and social media analysis using two forms of big data: smart card data on the Tokyo Metro and social media data on Twitter. Our system provides multiple coordinated views to visually, intuitively, and simultaneously explore changes in passengers' behavior and abnormal situations extracted from smart card data and situational explanations from real voices of passengers such as complaints about services extracted from social media data. We demonstrate the possibilities and usefulness of our novel visualization environment using a series of real data case studies and domain experts' feedbacks about various kinds of events.",
"title": ""
},
{
"docid": "a8c1224f291df5aeb655a2883b16bcfb",
"text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"title": ""
},
{
"docid": "4f7fcd45fa7b27dd3cf25ed441b7d527",
"text": "Forecasting financial time-series has long been among the most challenging problems in financial market analysis. In order to recognize the correct circumstances to enter or exit the markets investors usually employ statistical models (or even simple qualitative methods). However, the inherently noisy and stochastic nature of markets severely limits the forecasting accuracy of the used models. The introduction of electronic trading and the availability of large amounts of data allow for developing novel machine learning techniques that address some of the difficulties faced by the aforementioned methods. In this work we propose a deep learning methodology, based on recurrent neural networks, that can be used for predicting future price movements from large-scale high-frequency time-series data on Limit Order Books. The proposed method is evaluated using a large-scale dataset of limit order book events.",
"title": ""
},
{
"docid": "c26f27dd49598b7f9120f9a31dccb012",
"text": "The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.",
"title": ""
},
{
"docid": "f0284358cf418353b5b46d73bd887c77",
"text": "BACKGROUND\nSevere acute malnutrition has continued to be growing problem in Sub Saharan Africa. We investigated the factors associated with morbidity and mortality of under-five children admitted and managed in hospital for severe acute malnutrition.\n\n\nMETHODS\nIt was a retrospective quantitative review of hospital based records using patient files, ward death and discharge registers. It was conducted focussing on demographic, clinical and mortality data which was extracted on all children aged 0-60 months admitted to the University Teaching Hospital in Zambia from 2009 to 2013. Cox proportional Hazards regression was used to identify predictors of mortality and Kaplan Meier curves where used to predict the length of stay on the ward.\n\n\nRESULTS\nOverall (n = 9540) under-five children with severe acute malnutrition were admitted during the period under review, comprising 5148 (54%) males and 4386 (46%) females. Kwashiorkor was the most common type of severe acute malnutrition (62%) while diarrhoea and pneumonia were the most common co-morbidities. Overall mortality was at 46% with children with marasmus having the lowest survival rates on Kaplan Meier graphs. HIV infected children were 80% more likely to die compared to HIV uninfected children (HR = 1.8; 95%CI: 1.6-1.2). However, over time (2009-2013), admissions and mortality rates declined significantly (mortality 51% vs. 35%, P < 0.0001).\n\n\nCONCLUSIONS\nWe find evidence of declining mortality among the core morbid nutritional conditions, namely kwashiorkor, marasmus and marasmic-kwashiorkor among under-five children admitted at this hospital. The reasons for this are unclear or could be beyond the scope of this study. This decline in numbers could be either be associated with declining admissions or due to the interventions that have been implemented at community level to combat malnutrition such as provision of \"Ready to Use therapeutic food\" and prevention of mother to child transmission of HIV at health centre level. Strategies that enhance and expand growth monitoring interventions at community level to detect malnutrition early to reduce incidence of severe cases and mortality need to be strengthened.",
"title": ""
},
{
"docid": "dc34a320af0e7a104686a36f7a6101c3",
"text": "In this paper, the proposed SIMO (Single input multiple outputs) DC-DC converter based on coupled inductor. The required controllable high DC voltage and intermediate DC voltage with high voltage gain from low input voltage sources, like renewable energy, can be achieved easily from the proposed converter. The high voltage DC bus can be used as the leading power for a DC load and intermediate voltage DC output terminals can charge supplementary power sources like battery modules. This converter operates simply with one power switch. It incorporates the techniques of voltage clamping (VC) and zero current switching (ZCS). The simulation result in PSIM software shows that the aims of high efficiency, high voltage gain, several output voltages with unlike levels, are achieved.",
"title": ""
},
{
"docid": "3c0edb8ae2cf8ef616a500ec9f3ceb52",
"text": "In his book Outliers, Malcom Gladwell describes the 10,000-Hour Rule, a key to success in any field, as simply a matter of practicing a specific task that can be accomplished with 20 hours of work a week for 10 years [10]. Ongoing changes in technology and national security needs require aspiring excellent cybersecurity professionals to set a goal of 10,000 hours of relevant, hands-on skill development. The education system today is ill prepared to meet the challenge of producing an adequate number of cybersecurity professionals, but programs that use competitions and learning environments that teach depth are filling this void.",
"title": ""
},
{
"docid": "e793b233039c9cb105fa311fa08312cd",
"text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. Some simulation results are presented to verify the proposed MCSI topology.",
"title": ""
},
{
"docid": "08d59866cf8496573707d46a6cb520d4",
"text": "Healthcare is an integral component in people's lives, especially for the rising elderly population. Medicare is one such healthcare program that provides for the needs of the elderly. It is imperative that these healthcare programs are affordable, but this is not always the case. Out of the many possible factors for the rising cost of healthcare, claims fraud is a major contributor, but its impact can be lessened through effective fraud detection. We propose a general outlier detection model, based on Bayesian inference, using probabilistic programming. Our model provides probability distributions rather than just point values, as with most common outlier detection methods. Credible intervals are also generated to further enhance confidence that the detected outliers should in fact be considered outliers. Two case studies are presented demonstrating our model's effectiveness in detecting outliers. The first case study uses temperature data in order to provide a clear comparison of several outlier detection techniques. The second case study uses a Medicare dataset to showcase our proposed outlier detection model. Our results show that the successful detection of outliers, which indicate possible fraudulent activities, can provide effective and meaningful results for further investigation within medical specialties or by using real-world, medical provider fraud investigation cases.",
"title": ""
},
{
"docid": "ecf2b2d6a951d84aad15321f029fd014",
"text": "This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of low false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from anomalies detected. HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30 percent and 22 percent in using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of detecting intrusions and anomalies, simultaneously, by automated data mining and signature generation over Internet connection episodes",
"title": ""
},
{
"docid": "5ee5f4450ecc89b684e90e7b846f8365",
"text": "This study scrutinizes the predictive relationship between three referral channels, search engine, social medial, and third-party advertising, and online consumer search and purchase. The results derived from vector autoregressive models suggest that the three channels have differential predictive relationship with sale measures. The predictive power of the three channels is also considerably different in referring customers among competing online shopping websites. In the short run, referrals from all three channels have a significantly positive predictive relationship with the focal online store’s sales amount and volume, but having no significant relationship with conversion. Only referrals from search engines to the rival website have a significantly negative predictive relationship with the focal website’s sales and volume. In the long run, referrals from all three channels have a significant positive predictive relationship with the focal website’s sales, conversion and sales volume. In contrast, referrals from all three channels to the competing online stores have a significant negative predictive relationship with the focal website’s sales, conversion and sales volume. Our results also show that search engine referrals explains the most of the variance in sales, while social media referrals explains the most of the variance in conversion and third party ads referrals explains the most of the variance in sales volume. This study offers new insights for IT and marketing practitioners in respect to better and deeper understanding on marketing attribution and how different channels perform in order to optimize the media mix and overall performance.",
"title": ""
},
{
"docid": "bd3792071a2c7b13bf479aa138f67544",
"text": "Aging is considered the major risk factor for cancer, one of the most important mortality causes in the western world. Inflammaging, a state of chronic, low-level systemic inflammation, is a pervasive feature of human aging. Chronic inflammation increases cancer risk and affects all cancer stages, triggering the initial genetic mutation or epigenetic mechanism, promoting cancer initiation, progression and metastatic diffusion. Thus, inflammaging is a strong candidate to connect age and cancer. A corollary of this hypothesis is that interventions aiming to decrease inflammaging should protect against cancer, as well as most/all age-related diseases. Epidemiological data are concordant in suggesting that the Mediterranean Diet (MD) decreases the risk of a variety of cancers but the underpinning mechanism(s) is (are) still unclear. Here we review data indicating that the MD (as a whole diet or single bioactive nutrients typical of the MD) modulates multiple interconnected processes involved in carcinogenesis and inflammatory response such as free radical production, NF-κB activation and expression of inflammatory mediators, and the eicosanoids pathway. Particular attention is devoted to the capability of MD to affect the balance between pro- and anti-inflammaging as well as to emerging topics such as maintenance of gut microbiota (GM) homeostasis and epigenetic modulation of oncogenesis through specific microRNAs.",
"title": ""
},
{
"docid": "f856effa28ba7b60a5a2a4bba06ba2c4",
"text": "Entity synonyms are critical for many applications like information retrieval and named entity recognition in documents. The current trend is to automatically discover entity synonyms using statistical techniques on web data. Prior techniques suffer from several limitations like click log sparsity and inability to distinguish between entities of different concept classes. In this paper, we propose a general framework for robustly discovering entity synonym with two novel similarity functions that overcome the limitations of prior techniques. We develop efficient and scalable techniques leveraging the MapReduce framework to discover synonyms at large scale. To handle long entity names with extraneous tokens, we propose techniques to effectively map long entity names to short queries in query log. Our experiments on real data from different entity domains demonstrate the superior quality of our synonyms as well as the efficiency of our algorithms. The entity synonyms produced by our system is in production in Bing Shopping and Video search, with experiments showing the significance it brings in improving search experience.",
"title": ""
}
] | scidocsrr |
bf600914fd5e039942734dd724c518f0 | Hypnosis and Mindfulness: The Twain Finally Meet. | [
{
"docid": "0b096c5cf5bac921c0e81a30c6a482a4",
"text": "OBJECTIVE\nTo provide a comprehensive review and evaluation of the psychological and neurophysiological literature pertaining to mindfulness meditation.\n\n\nMETHODS\nA search for papers in English was undertaken using PsycINFO (from 1804 onward), MedLine (from 1966 onward) and the Cochrane Library with the following search terms: Vipassana, Mindfulness, Meditation, Zen, Insight, EEG, ERP, fMRI, neuroimaging and intervention. In addition, retrieved papers and reports known to the authors were also reviewed for additional relevant literature.\n\n\nRESULTS\nMindfulness-based therapeutic interventions appear to be effective in the treatment of depression, anxiety, psychosis, borderline personality disorder and suicidal/self-harm behaviour. Mindfulness meditation per se is effective in reducing substance use and recidivism rates in incarcerated populations but has not been specifically investigated in populations with psychiatric disorders. Electroencephalography research suggests increased alpha, theta and beta activity in frontal and posterior regions, some gamma band effects, with theta activity strongly related to level of experience of meditation; however, these findings have not been consistent. The few neuroimaging studies that have been conducted suggest volumetric and functional change in key brain regions.\n\n\nCONCLUSIONS\nPreliminary findings from treatment outcome studies provide support for the application of mindfulness-based interventions in the treatment of affective, anxiety and personality disorders. However, direct evidence for the effectiveness of mindfulness meditation per se in the treatment of psychiatric disorders is needed. Current neurophysiological and imaging research findings have identified neural changes in association with meditation and provide a potentially promising avenue for future research.",
"title": ""
}
] | [
{
"docid": "f8a1ba148f564f9dcc0c57873bb5ce60",
"text": "Advances in online technologies have raised new concerns about privacy. A sample of expert household end users was surveyed concerning privacy, risk perceptions, and online behavior intentions. A new e-privacy typology consisting of privacyaware, privacy-suspicious, and privacy-active types was developed from a principal component factor analysis. Results suggest the presence of a privacy hierarchy of effects where awareness leads to suspicion, which subsequently leads to active behavior. An important finding was that privacy-active behavior that was hypothesized to increase the likelihood of online subscription and purchasing was not found to be significant. A further finding was that perceived risk had a strong negative influence on the extent to which respondents participated in online subscription and purchasing. Based on these results, a number of implications for managers and directions for future research are discussed.",
"title": ""
},
{
"docid": "fc26f9bcbd28125607c90e15c3069cab",
"text": "Topological data analysis (TDA) is an emerging mathematical concept for characterizing shapes in complex data. In TDA, persistence diagrams are widely recognized as a useful descriptor of data, and can distinguish robust and noisy topological properties. This paper proposes a kernel method on persistence diagrams to develop a statistical framework in TDA. The proposed kernel satisfies the stability property and provides explicit control on the effect of persistence. Furthermore, the method allows a fast approximation technique. The method is applied into practical data on proteins and oxide glasses, and the results show the advantage of our method compared to other relevant methods on persistence diagrams.",
"title": ""
},
{
"docid": "41ec184d686b2ff1ffdabb8e4c24a6e9",
"text": "In this paper, we present a three-stage method for the estimation of the color of the illuminant in RAW images. The first stage uses a convolutional neural network that has been specially designed to produce multiple local estimates of the illuminant. The second stage, given the local estimates, determines the number of illuminants in the scene. Finally, local illuminant estimates are refined by non-linear local aggregation, resulting in a global estimate in case of single illuminant. An extensive comparison with both local and global illuminant estimation methods in the state of the art, on standard data sets with single and multiple illuminants, proves the effectiveness of our method.",
"title": ""
},
{
"docid": "4337803c5834dc98da0af2141293bb1b",
"text": "This paper addresses the joint design of transmit and receive beamforming or linear processing (commonly termed linear precoding at the transmitter and equalization at the receiver) for multicarrier multi-input multi-output (MIMO) channels under a variety of design criteria. Instead of considering each design criterion in a separate way, we generalize the existing results by developing a unified framework based on considering two families of objective functions that embrace most reasonable criteria to design a communication system: Schur-concave and Schur-convex functions. Once the optimal structure of the transmit-receive processing is known, the design problem simplifies and can be formulated within the powerful framework of convex optimization theory, in which a great number of interesting design criteria can be easily accommodated and efficiently solved even though closed-form expressions may not exist. From this perspective, we analyze a variety of design criteria and, in particular, we derive optimal beamvectors in the sense of having minimum average bit error rate (BER). Additional constraints on the Peak-to-Average Ratio (PAR) or on the signal dynamic range are easily included in the design. We propose two multi-level water-filling practical solutions that perform very close to the optimal in terms of average BER with a low implementation complexity. If cooperation among the processing operating at different carriers is allowed, the performance improves significantly. Interestingly, with carrier cooperation, it turns out that the exact optimal solution in terms of average BER can be obtained in closed-form. Manuscript received February 25, 2002; revised December 20, 2002. Part of the work was presented at the 40th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2002 [37] . This work was partially supported by the European Comission under Project IST-2000-30148 I-METRA; Samsung Advanced Institute of Technology; the Spanish Government (CICYT) TIC2000-1025, TIC2001-2356, TIC2002-04594, FIT-070000-2000-649 (MEDEA + A105 UniLAN); and the Catalan Government (DURSI) 1999FI 00588, 2001SGR 00268.",
"title": ""
},
{
"docid": "843aa1e751391fb740571c08de46d2ca",
"text": "Antineutrophil cytoplasm antibody (ANCA)-associated vasculitides are small-vessel vasculitides that include granulomatosis with polyangiitis (formerly Wegener's granulomatosis), microscopic polyangiitis, and eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome). Renal-limited ANCA-associated vasculitides can be considered the fourth entity. Despite their rarity and still unknown cause(s), research pertaining to ANCA-associated vasculitides has been very active over the past decades. The pathogenic role of antimyeloperoxidase ANCA (MPO-ANCA) has been supported using several animal models, but that of antiproteinase 3 ANCA (PR3-ANCA) has not been as strongly demonstrated. Moreover, some MPO-ANCA subsets, which are directed against a few specific MPO epitopes, have recently been found to be better associated with disease activity, but a different method than the one presently used in routine detection is required to detect them. B cells possibly play a major role in the pathogenesis because they produce ANCAs, as well as neutrophil abnormalities and imbalances in different T-cell subtypes [T helper (Th)1, Th2, Th17, regulatory cluster of differentiation (CD)4+ CD25+ forkhead box P3 (FoxP3)+ T cells] and/or cytokine-chemokine networks. The alternative complement pathway is also involved, and its blockade has been shown to prevent renal disease in an MPO-ANCA murine model. Other recent studies suggested strongest genetic associations by ANCA type rather than by clinical diagnosis. The induction treatment for severe granulomatosis with polyangiitis and microscopic polyangiitis is relatively well codified but does not (yet) really differ by precise diagnosis or ANCA type. It comprises glucocorticoids combined with another immunosuppressant, cyclophosphamide or rituximab. The choice between the two immunosuppressants must consider the comorbidities, past exposure to cyclophosphamide for relapsers, plans for pregnancy, and also the cost of rituximab. Once remission is achieved, maintenance strategy following cyclophosphamide-based induction relies on less toxic agents such as azathioprine or methotrexate. The optimal maintenance strategy following rituximab-based induction therapy remains to be determined. Preliminary results on rituximab for maintenance therapy appear promising. Efforts are still under way to determine the optimal duration of maintenance therapy, ideally tailored according to the characteristics of each patient and the previous treatment received.",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "b36058bcfcb5f5f4084fe131c42b13d9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "814e593fac017e5605c4992ef7b25d6d",
"text": "This paper discusses the design of high power density transformer and inductor for the high frequency dual active bridge (DAB) GaN charger. Because the charger operates at 500 kHz, the inductance needed to achieve ZVS for the DAB converter is reduced to as low as 3μH. As a result, it is possible to utilize the leakage inductor as the series inductor of DAB converter. To create such amount of leakage inductance, certain space between primary and secondary winding is allocated to store the leakage flux energy. The designed transformer is above 99.2% efficiency while delivering 3.3kW. The power density of the designed transformer is 6.3 times of the lumped transformer and inductor in 50 kHz Si Charger. The detailed design procedure and loss analysis are discussed.",
"title": ""
},
{
"docid": "f16676f00cd50173d75bd61936ec200c",
"text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.",
"title": ""
},
{
"docid": "24e54cbc2c419de1d2d56e64eb428004",
"text": "Internet of Things has become a predominant phenomenon in every sphere of smart life. Connected Cars and Vehicular Internet of Things, which involves communication and data exchange between vehicles, traffic infrastructure or other entities are pivotal to realize the vision of smart city and intelligent transportation. Vehicular Cloud offers a promising architecture wherein storage and processing capabilities of smart objects are utilized to provide on-the-fly fog platform. Researchers have demonstrated vulnerabilities in this emerging vehicular IoT ecosystem, where data has been stolen from critical sensors and smart vehicles controlled remotely. Security and privacy is important in Internet of Vehicles (IoV) where access to electronic control units, applications and data in connected cars should only be authorized to legitimate users, sensors or vehicles. In this paper, we propose an authorization framework to secure this dynamic system where interactions among entities is not pre-defined. We provide an extended access control oriented (E-ACO) architecture relevant to IoV and discuss the need of vehicular clouds in this time and location sensitive environment. We outline approaches to different access control models which can be enforced at various layers of E-ACO architecture and in the authorization framework. Finally, we discuss use cases to illustrate access control requirements in our vision of cloud assisted connected cars and vehicular IoT, and discuss possible research directions.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "0fbf7e46689102a4dd031eb54e6c083c",
"text": "The analyzing and extracting important information from a text document is crucial and has produced interest in the area of text mining and information retrieval. This process is used in order to notice particularly in the text. Furthermore, on view of the readers that people tend to read almost everything in text documents to find some specific information. However, reading a text document consumes time to complete and additional time to extract information. Thus, classifying text to a subject can guide a person to find relevant information. In this paper, a subject identification method which is based on term frequency to categorize groups of text into a particular subject is proposed. Since term frequency tends to ignore the semantics of a document, the term extraction algorithm is introduced for improving the result of the extracted relevant terms from the text. The evaluation of the extracted terms has shown that the proposed method is exceeded other extraction techniques.",
"title": ""
},
{
"docid": "bb19e6b00fca27c455316f09a626407c",
"text": "On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers isa helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic Statistical Manual of Mental Disorders, Fourth Edition, changes to a 2-symptom category from a 3-symptom category in the Diagnostic Statistical Manual of Mental Disorders, Fifth Edition(DSM-5): deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)",
"title": ""
},
{
"docid": "daf997a64778e0e2d5fc1a07ad69b0e4",
"text": "A soft-switching single-ended primary inductor converter (SEPIC) is presented in this paper. An auxiliary switch and a clamp capacitor are connected. A coupled inductor and an auxiliary inductor are utilized to obtain ripple-free input current and achieve zero-voltage-switching (ZVS) operation of the main and auxiliary switches. The voltage multiplier technique and active clamp technique are applied to the conventional SEPIC converter to increase the voltage gain, reduce the voltage stresses of the power switches and diode. Moreover, by utilizing the resonance between the resonant inductor and the capacitor in the voltage multiplier circuit, the zero-current-switching operation of the output diode is achieved and its reverse-recovery loss is significantly reduced. The proposed converter achieves high efficiency due to soft-switching commutations of the power semiconductor devices. The presented theoretical analysis is verified by a prototype of 100 kHz and 80 W converter. Also, the measured efficiency of the proposed converter has reached a value of 94.8% at the maximum output power.",
"title": ""
},
{
"docid": "76375aa50ebe8388d653241ba481ecd2",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "1bdf406fd827af2dddcecef934e291d4",
"text": "This study was conducted to collect data on specific volatile fatty acids (produced from soft tissue decomposition) and various anions and cations (liberated from soft tissue and bone), deposited in soil solution underneath decomposing human cadavers as an aid in determining the \"time since death.\" Seven nude subjects (two black males, a white female and four white males) were placed within a decay research facility at various times of the year and allowed to decompose naturally. Data were amassed every three days in the spring and summer, and weekly in the fall and winter. Analyses of the data reveal distinct patterns in the soil solution for volatile fatty acids during soft tissue decomposition and for specific anions and cations once skeletonized, when based on accumulated degree days. Decompositional rates were also obtained, providing valuable information for estimating the \"maximum time since death.\" Melanin concentrations observed in soil solution during this study also yields information directed at discerning racial affinities. Application of these data can significantly enhance \"time since death\" determinations currently in use.",
"title": ""
},
{
"docid": "90558e7b7d2a5fbc76fe3d2c824289b0",
"text": "This paper deals with a 3 dB Ku-band coupler designed in substrate integrated waveguide (SIW) technology. A microstrip-SIW-transition is designed with a return loss (RL) greater than 20 dB. Rogers 4003 substrate is used for the SIW with a gold plated copper metallisation. The coupler achieves a relative bandwidth of 26.1% with an insertion loss (IL) lower than 2 dB, coupling balance smaller than 0.5 dB and RL and isolation greater than 15 dB.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
},
{
"docid": "7aaee14383fc247165017345ab2927a8",
"text": "The goals of the paper are as follows: i) review some qualitative properties of oil and gas prices in the last 15 years; ii) propose some mathematical elements towards a definition of mean reversion that would not be reduced to the form of the drift in a stochastic differential equation; iii) conduct econometric tests in order to conclude whether mean reversion still exists in the energy commodity price behavior. Regarding the third point, a clear “break” in the properties of oil and natural gas prices and volatility can be exhibited in the period 2000-2001.",
"title": ""
}
] | scidocsrr |
1d27839b112aeb226d6897c9f2819d5f | Interpretable 3D Human Action Analysis with Temporal Convolutional Networks | [
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "e41680f7ade6fa91d275e5e5137b4750",
"text": "The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] | [
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "2e82b7c84ed48dbeb95564eb1c63ecb6",
"text": "Received Nov 26, 2017 Revised Jan 22, 2018 Accepted Feb 25, 2018 This paper presents the simulation of the control of doubly star induction motor using Direct Torque Control (DTC) based on Proportional and Integral controller (PI) and Fuzzy Logic Controller (FLC). In addition, the work describes a model of doubly star induction motor in α-β reference frame theory and its computer simulation in MATLAB/SIMULINK®.The structure of the DTC has several advantages such as the short sampling time required by the TC schemes makes them suited to a very fast flux and torque controlled drives as well as the simplicity of the control algorithm.the generalpurpose induction drives in very wide range using DTC because it is the excellent solution. The performances of the DTC with a PI controller and FLC are tested under differents speeds command values and load torque. Keyword:",
"title": ""
},
{
"docid": "bef3c65efd72249fb3d668438f9961e5",
"text": "This study investigates the effects of earthquake types, magnitudes, and hysteretic behavior on the peak and residual ductility demands of inelastic single-degree-of-freedom systems and evaluates the effects of major aftershocks on the non-linear structural responses. An extensive dataset of real mainshock–aftershock sequences for Japanese earthquakes is developed. The constructed dataset is large, compared with previous datasets of similar kinds, and includes numerous sequences from the 2011 Tohoku earthquake, facilitating an investigation of spatial aspects of the aftershock effects. The empirical assessment of peak and residual ductility demands of numerous inelastic systems having different vibration periods, yield strengths, and hysteretic characteristics indicates that the increase in seismic demand measures due to aftershocks occurs rarely but can be significant. For a large mega-thrust subduction earthquake, a critical factor for major aftershock damage is the spatial occurrence process of aftershocks.",
"title": ""
},
{
"docid": "35aa75f5bd79c8d97e374c33f5bad615",
"text": "Historically, much attention has been given to the unit processes and the integration of those unit processes to improve product yield. Less attention has been given to the wafer environment, either during or post processing. This paper contains a detailed discussion on how particles and Airborne Molecular Contaminants (AMCs) from the wafer environment interact and produce undesired effects on the wafer. Sources of wafer environmental contamination are the process itself, ambient environment, outgassing from wafers, and FOUP contamination. Establishing a strategy that reduces contamination inside the FOUP will increase yield and decrease defect variability. Three primary variables that greatly impact this strategy are FOUP contamination mitigation, FOUP material, and FOUP metrology and cleaning method.",
"title": ""
},
{
"docid": "15bf072dd0195fa8a9eb19fb82862a4e",
"text": "Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high performance computing for general-purpose applications. Due to GPU's tremendous computing capability, it has emerged as the co-processor of the CPU to achieve a high overall throughput. CUDA programming model provides the programmers adequate C language like APIs to better exploit the parallel power of the GPU. K-nearest neighbor (KNN) is a widely used classification technique and has significant applications in various domains, especially in text classification. The computational-intensive nature of KNN requires a high performance implementation. In this paper, we present a CUDA-based parallel implementation of KNN, CUKNN, using CUDA multi-thread model, where the data elements are processed in a data-parallel fashion. Various CUDA optimization techniques are applied to maximize the utilization of the GPU. CUKNN outperforms the serial KNN on an HP xw8600 workstation significantly, achieving up to 46.71X speedup including I/O time. It also shows good scalability when varying the dimension of the reference dataset, the number of records in the reference dataset, and the number of records in the query dataset.",
"title": ""
},
{
"docid": "8231e10912b42e0f8ac90392e6e0efbb",
"text": "Zobrist Hashing: An Efficient Work Distribution Method for Parallel Best-First Search Yuu Jinnai, Alex Fukunaga VIS: Text and Vision Oral Presentations 1326 SentiCap: Generating Image Descriptions with Sentiments Alexander Patrick Mathews, Lexing Xie, Xuming He 1950 Reading Scene Text in Deep Convolutional Sequences Pan He, Weilin Huang, Yu Qiao, Chen Change Loy, Xiaoou Tang 1247 Creating Images by Learning Image Semantics Using Vector Space Models Derrall Heath, Dan Ventura Poster Spotlight Talks 655 Towards Domain Adaptive Vehicle Detection in Satellite Image by Supervised SuperResolution Transfer Liujuan Cao, Rongrong Ji, Cheng Wang, Jonathan Li 499 Transductive Zero-Shot Recognition via Shared Model Space Learning Yuchen Guo, Guiguang Ding, Xiaoming Jin, Jianmin Wang 1255 Exploiting View-Specific Appearance Similarities Across Classes for Zero-shot Pose Prediction: A Metric Learning Approach Alina Kuznetsova, Sung Ju Hwang, Bodo Rosenhahn, Leonid Sigal NLP: Topic Flow Oral Presentations 744 Topical Analysis of Interactions Between News and Social Media Ting Hua, Yue Ning, Feng Chen, Chang-Tien Lu, Naren Ramakrishnan 1561 Tracking Idea Flows between Social Groups Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, Yangqiu Song 1201 Modeling Evolving Relationships Between Characters in Literary Novels Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, Chris Dyer Poster Spotlight Talks 405 Identifying Search",
"title": ""
},
{
"docid": "d40584c70648ec82a4f59d835ddfd1a2",
"text": "Objective To evaluate efficacy of probiotics in prevention and treatment of diarrhoea associated with the use of antibiotics. Design Meta-analysis; outcome data (proportion of patients not getting diarrhoea) were analysed, pooled, and compared to determine odds ratios in treated and control groups. Identification Studies identified by searching Medline between 1966 and 2000 and the Cochrane Library. Studies reviewed Nine randomised, double blind, placebo controlled trials of probiotics. Results Two of the nine studies investigated the effects of probiotics in children. Four trials used a yeast (Saccharomyces boulardii), four used lactobacilli, and one used a strain of enterococcus that produced lactic acid. Three trials used a combination of probiotic strains of bacteria. In all nine trials, the probiotics were given in combination with antibiotics and the control groups received placebo and antibiotics. The odds ratio in favour of active treatment over placebo in preventing diarrhoea associated with antibiotics was 0.39 (95% confidence interval 0.25 to 0.62; P < 0.001) for the yeast and 0.34 (0.19 to 0.61; P < 0.01 for lactobacilli. The combined odds ratio was 0.37 (0.26 to 0.53; P < 0.001) in favour of active treatment over placebo. Conclusions The meta-analysis suggests that probiotics can be used to prevent antibiotic associated diarrhoea and that S boulardii and lactobacilli have the potential to be used in this situation. The efficacy of probiotics in treating antibiotic associated diarrhoea remains to be proved. A further large trial in which probiotics are used as preventive agents should look at the costs of and need for routine use of these agents.",
"title": ""
},
{
"docid": "166230b235fe0c18a80041741a7c5e4a",
"text": "Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. Following this light, we investigate using CNNs for generating melody (a series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distributions of melodies, making it a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g. a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e. tracks). We conduct a user study to compare the melody of eight-bar long generated by MidiNet and by Google’s MelodyRNN models, each time using the same priming melody. Result shows that MidiNet performs comparably with MelodyRNN models in being realistic and pleasant to listen to, yet MidiNet’s melodies are reported to be much more interesting.",
"title": ""
},
{
"docid": "f85a8a7e11a19d89f2709cc3c87b98fc",
"text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.",
"title": ""
},
{
"docid": "774f1a2403acf459a4eb594c5772a362",
"text": "motion selection DTU Orbit (12/12/2018) ISSARS: An integrated software environment for structure-specific earthquake ground motion selection Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfils the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment. (C) 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "494d720d5a8c7c58b795c5c6131fa8d1",
"text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.",
"title": ""
},
{
"docid": "a34e04069b232309b39994d21bb0f89a",
"text": "In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.",
"title": ""
},
{
"docid": "ccb1634d00239d7b08946144f3e0a763",
"text": "Designing RNA sequences that fold into specific structures and perform desired biological functions is an emerging field in bioengineering with broad applications from intracellular chemical catalysis to cancer therapy via selective gene silencing. Effective RNA design requires first solving the inverse folding problem: given a target structure, propose a sequence that folds into that structure. Although significant progress has been made in developing computational algorithms for this purpose, current approaches are ineffective at designing sequences for complex targets, limiting their utility in real-world applications. However, an alternative that has shown significantly higher performance are human players of the online RNA design game EteRNA. Through many rounds of gameplay, these players have developed a collective library of \"human\" rules and strategies for RNA design that have proven to be more effective than current computational approaches, especially for complex targets. Here, we present an RNA design agent, SentRNA, which consists of a fully-connected neural network trained using the eternasolves dataset, a set of 1.8 x 104 player-submitted sequences across 724 unique targets. The agent first predicts an initial sequence for a target using the trained network, and then refines that solution if necessary using a short adaptive walk utilizing a canon of standard design moves. Through this approach, we observe SentRNA can learn and apply human-like design strategies to solve several complex targets previously unsolvable by any computational approach. We thus demonstrate that incorporating a prior of human design strategies into a computational agent can significantly boost its performance, and suggests a new paradigm for machine-based RNA design. Introduction: Solving the inverse folding problem for RNA is a critical prerequisite to effective RNA design, an emerging field of modern bioengineering research.1,2,3,4,5 A RNA molecule's function is highly dependent on the structure into which it folds, which in turn is determined by the sequence of nucleotides that comprise it. Therefore, designing RNA molecules to perform specific functions requires designing sequences that fold into specific structures. As such, significant efforts have been made over the past several decades in developing computational algorithms to reliably predict RNA sequences that fold into a given target.6,7,8,9,10,11 Existing computational methods for inverse RNA folding can be roughly separated into two types. The first type generates an initial guess of a sequence and then refines the sequence using some form of stochastic search. Published algorithms that fall under this category include RNAInverse,6 RNA-SSD,7 INFO-RNA,8 NUPACK,10 and MODENA.11 RNAInverse, one of the first inverse folding algorithms, initializes the sequence randomly and then uses a simple adaptive walk in which random single or pair mutations are successively performed, and a mutation is accepted if it improves the structural similarity between the current and the target structure. RNA-SSD first performs hierarchical decomposition of the target and then performs adaptive walk separately on each substructure to reduce the size of the search space. INFO-RNA first generates an initial guess of the sequence using dynamic programming to estimate the minimum energy sequence for a target structure, and then performs simulated annealing on the sequence. 
NUPACK performs hierarchical decomposition of the target and assigns an initial sequence to each substructure. For each sequence, it then generates a thermodynamic ensemble of possible structures and stochastically perturbs the sequence to optimize the \"ensemble defect\" term, which represents the average number of improperly paired bases relative to the target over the entire ensemble. Finally, one of the most recent algorithms, MODENA, generates an ensemble of initial sequences using a genetic algorithm, and then performs stochastic search using crossovers and single-point mutations. The second type of design algorithm, exemplified by programs such as DSS-Opt, foregoes stochastic search and instead attempts to generate a valid sequence directly from gradient-based optimization. Given a target, DSS-Opt generates an initial sequence and then performs a gradient-based optimization of an objective function that includes the predicted free energy of the target and a \"negative design\" term that punishes improperly paired bases. Both types of algorithms have proven effective given simple to moderately complex structures. However, there is still much room for improvement. A recent benchmark of these algorithms showed that they consistently fail given large or structurally complex targets, 12 limiting their applicability to designing RNA molecules for real-world biological applications. A promising alternative approach to RNA design that has consistently outperformed current computational methods is EteRNA, a web-based graphical interface in which the RNA design problem is presented to humans as a game. 13 Players of the game are shown 2D representations of target RNA structures (\"puzzles\") and asked to propose sequences that fold into them. These sequences are first judged using the ViennaRNA 1.8.5 software package6 and then validated experimentally. Through this cycle of design and evaluation, players build a collective library of design strategies that can then be applied to new, more complex puzzles. These strategies are distinct from those employed by design algorithms such as DSS-Opt and NUPACK in that they are honed through visual pattern recognition and experience. Remarkably, these human-developed strategies have proven more effective for RNA design than current computational methods. For example, EteRNA players significantly outperform even the best computational algorithms on the Eterna100, a set of 100 challenging puzzles designed by EteRNA players to showcase a variety of RNA structural elements that make design difficult. While top-ranking human players can solve all 100 puzzles, even the best-scoring computational algorithm, MODENA, could only solve 54 / 100 puzzles.12 Given the success of these strategies, we decided to investigate whether incorporating these strategies into a computational agent can increase its performance beyond that of current state-of-the-art methods. In this study, we present SentRNA, a computational agent for RNA design that significantly outperforms existing computational algorithms by learning human-like design strategies in a data driven manner. The agent consists of a fully-connected neural network that takes as input a featurized representation of the local environment around a given position in a puzzle. The output is length-4, corresponding to the four RNA nucleotides (bases): A, U, C, or G. The model is trained using the eternasolves dataset, a custom-compiled collection of 1.8 x 104 playersubmitted solutions across 724 unique puzzles. 
These puzzles comprise both the “Progression” puzzles, designed for beginning EteRNA players, as well as several “Lab” puzzles for which solutions were experimentally synthesized and tested. During validation and testing the agent takes an initially blank puzzle and assigns bases to every position greedily based on the output values. If this initial prediction is not valid, as judged by ViennaRNA 1.8.5, it is further refined via an adaptive walk using a canon of standard design moves compiled by players and taught to new players through the game's puzzle progression. Overall, we trained and tested an ensemble of 165 models, each using a distinct training set and model input (see Methods). Collectively, the ensemble of models can solve 42 / 100 puzzles from the Eterna100 by neural network prediction alone, and 80 / 100 puzzles using neural network prediction + refinement. Among these 80 puzzles are all 15 puzzles highlighted during a previous benchmark by Anderson Lee et al.12 Notably, among these 15 puzzles are 7 puzzles yet unsolvable by any computational algorithm. This study demonstrates that teaching human design strategies to a computational RNA design agent in a data-driven manner can lead to significant increases in performance over previous methods, and represents a new paradigm in machine-based RNA design in which both human and computational design strategies are united into a single agent. Methods: Code availability: The source code for SentRNA, all our trained models, and the full eternasolves dataset can be found on GitHub: https://github.com/jadeshi/SentRNA. Hardware: We performed all computations (training, validation, testing, and refinement) using a desktop computer with an Intel Core i7-6700K @ 4.00 GHz CPU and 16 GB of RAM. Creating 2D structural representations of puzzles: During training and testing of almost all models, we used the RNAplot function from ViennaRNA 1.8.5 to render puzzles as 2D structures given their dot-bracket representations. However, when training and testing two specific models M6 and M8 on two highly symmetric puzzles, “Mat Lot 2-2 B\" and “Mutated chicken feet” (see Results and Discussion), we decided to use an in-house rendering algorithm (hereafter called EteRNA rendering) in place of RNAplot, as we found the RNAplot was unable to properly render the symmetric structure of the puzzles. Neural network architecture: Our goal is to create an RNA design agent that can propose a sequence of RNA bases that folds into a given target structure, i.e. fill in an initially blank EteRNA puzzle. To do this, we employ a fully connected neural network that assigns an identity of A, U, C, or G to each position in the puzzle given a featurized representation of its local environment. During test time, we expose the agent to every position in the puzzle sequentially and have it predict its identity. The neural network was implemented using TensorFlow14 and contains three hidden layers of 100 nodes with ReLU nonlinearitie",
"title": ""
},
{
"docid": "b68336c869207720d6ab1880744b70be",
"text": "Particle Swarm Optimization (PSO) algorithms represent a new approach for optimization. In this paper image enhancement is considered as an optimization problem and PSO is used to solve it. Image enhancement is mainly done by maximizing the information content of the enhanced image with intensity transformation function. In the present work a parameterized transformation function is used, which uses local and global information of the image. Here an objective criterion for measuring image enhancement is used which considers entropy and edge information of the image. We tried to achieve the best enhanced image according to the objective criterion by optimizing the parameters used in the transformation function with the help of PSO. Results are compared with other enhancement techniques, viz. histogram equalization, contrast stretching and genetic algorithm based image enhancement.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
{
"docid": "47a12c3101f0aa6cd7f9675a211bcfae",
"text": "This paper describes the OpenViBE software platform which enables researchers to design, test, and use braincomputer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest among the virtual reality (VR) community since they have appeared as promising interaction devices for virtual environments (VEs). The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made available to non-programmers thanks to visual programming, and (4) various tools offered to the different types of users. The platform features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications together with the evaluation of the platform computational performances showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is a free software distributed under an open-source license.",
"title": ""
},
{
"docid": "0be69ebc6297e7a4fb71594d7c38cb86",
"text": "Internet of Things (IoT), which will create a huge network of billions or trillions of “Things” communicating with one another, are facing many technical and application challenges. This paper introduces the status of IoT development in China, including policies, R&D plans, applications, and standardization. With China's perspective, this paper depicts such challenges on technologies, applications, and standardization, and also proposes an open and general IoT architecture consisting of three platforms to meet the architecture challenge. Finally, this paper discusses the opportunity and prospect of IoT.",
"title": ""
}
] | scidocsrr |
33d135c9f4ebfeeaa18eb4709484cf7c | Sentiment analysis of Twitter data within big data distributed environment for stock prediction | [
{
"docid": "f9824ae0b73ebecf4b3a893392e77d67",
"text": "This paper proposes genetic algorithms (GAs) approach to feature discretization and the determination of connection weights for artificial neural networks (ANNs) to predict the stock price index. Previous research proposed many hybrid models of ANN and GA for the method of training the network, feature subset selection, and topology optimization. In most of these studies, however, GA is only used to improve the learning algorithm itself. In this study, GA is employed not only to improve the learning algorithm, but also to reduce the complexity in feature space. GA optimizes simultaneously the connection weights between layers and the thresholds for feature discretization. The genetically evolved weights mitigate the well-known limitations of the gradient descent algorithm. In addition, globally searched feature discretization reduces the dimensionality of the feature space and eliminates irrelevant factors. Experimental results show that GA approach to the feature discretization model outperforms the other two conventional models. q 2000 Published by Elsevier Science Ltd.",
"title": ""
}
] | [
{
"docid": "e364a2ac82f42c87f88b6ed508dc0d8e",
"text": "In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions how noise level changes with respect to brightness and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and featurepreserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "46f95796996d4638afcc7b703a1f3805",
"text": "One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent method for job scheduling to achieve a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have some drawbacks, such as single point of failure and lack of scalability. Moreover, decentralized methods require a coordination mechanism with limited communications. In this paper, we propose a multi-agent approach to job scheduling in Grid, named Centralized Learning Distributed Scheduling (CLDS), by utilizing the reinforcement learning framework. The CLDS is a model free approach that uses the information of jobs and their completion time to estimate the efficiency of resources. In this method, there are a learner agent and several scheduler agents that perform the task of learning and job scheduling with the use of a coordination strategy that maintains the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that the CLDS can effectively balance the load of system even in large scale and heavy loaded Grids, while maintains its adaptive performance and scalability.",
"title": ""
},
{
"docid": "32705e53544e1b6328d44b1086f91f0c",
"text": "The aim of this study was the evaluation and prediction of profile changes after Le Fort I osteotomy including maxillary impaction and subsequent autorotation of the mandible. A group of 42 patients (32 female, 10 male) underwent a Le Fort I osteotomy with posterior impaction after preoperative orthodontic treatment. No surgical intervention in the mandible was performed. Pre- and postoperative lateral cephalograms of each patient were analyzed in two steps using the Wilcoxon and Mann-Whitney U test. All patients were evaluated for vertical and sagittal skeletal and soft tissue changes. These results led to further classification into three groups according to the type and extent of maxillary impaction. These groups included parallel impaction, posterior impaction with additional anterior subsidence, and posterior impaction only. The results of the first evaluation step revealed that the chin had advanced on average by 79%, while the lower face was shortened by as much as 70% in the pogonion point. However, the second evaluation showed that the type and extent of maxillary impaction led to significant changes in these parameters. Parallel maxillary impaction resulted in 100%, posterior impaction in 80% and posterior impaction with anterior subsidence in 50% advancement of the mandible in the pogonion point in relation to the distance covered during impaction. This study showed that the change in the facial profile caused by autorotation of the mandible after Le Fort I osteotomy and maxillary impaction can be predicted in relation to the dimensions of maxillary impaction. Ziel der Untersuchung war eine Auswertung und Vorhersage der Profilveränderungen nach LeFort-I-Osteotomie mit maxillärer Impaktion und subsequenter Autorotation des Unterkiefers. Die Patientengruppe bestand aus 42 Patienten (32 weiblich, 10 männlich), bei welchen nach kieferorthopädischer Vorbehandlung eine LeFort-I-Osteotomie mit dorsaler Impaktion ohne Eingriff im Unterkiefer erfolgte. Die präund postoperativen Fernröntgenseitenbilder wurden in zwei Stufen untersucht. Die statistische Auswertung erfolgte mit dem Wilcoxon- und Mann-Whitney-U-Test. Bei allen Patienten wurden die vertikalen und sagittalen skelettalen und Weichteilveränderungen vermessen. Die Ergebnisse führten zu einer weiteren Aufteilung in drei Gruppen nach Art und Ausmaß der maxillären Impaktion. Diese Gruppen beinhalteten die parallele Impaktion, die posteriore Impaktion mit zusätzlicher anteriorer Absenkung und die alleinige posteriore Impaktion. Die Ergebnisse des ersten Auswertungsschrittes zeigten eine circa 79%ige Vorverlagerung des Kinns bei einer circa 70%igen Verkürzung des Untergesichts im Punkt Pogonion. Der zweite Auswertungsschritt zeigte jedoch, dass Art und Ausmaß der maxillären Impaktion zu signifikanten Änderungen dieser Parameter führten. Bei paralleler Impaktion ergab sich eine 100%ige, bei dorsaler Impaktion eine 80%ige und bei dorsaler Impaktion mit anteriorer Absenkung eine 50%ige Übertragung der Impaktionsstrecke auf die Vorverlagerung des Unterkiefers gemessen am Punkt Pogonion. Die Untersuchung zeigt, dass die Profilveränderung durch Autorotation der Mandibula nach LeFort-I-Osteotomie und maxilläre Impaktion in Relation zur Dimension der maxillären Impaktion vorhergesagt werden kann.",
"title": ""
},
{
"docid": "98e069b5cfa44a3d412b16ceb809fa51",
"text": "Mutations in EMBRYONIC FLOWER (EMF) genes EMF1 and EMF2 abolish rosette development, and the mutants produce either a much reduced inflorescence or a transformed flower. These mutant characteristics suggest a repressive effect of EMF activities on reproductive development. To investigate the role of EMF genes in regulating reproductive development, we studied the relationship between EMF genes and the genes regulating inflorescence and flower development. We found that APETALA1 and AGAMOUS promoters were activated in germinating emf seedlings, suggesting that these genes may normally be suppressed in wild-type seedlings in which EMF activities are high. The phenotype of double mutants combining emf1-2 and apetala1, apetala2, leafy1, apetala1 cauliflower, and terminal flower1 showed that emf1-2 is epistatic in all cases, suggesting that EMF genes act downstream from these genes in mediating the inflorescence-to-flower transition. Constitutive expression of LEAFY in weak emf1, but not emf2, mutants increased the severity of the emf phenotype, indicating an inhibition of EMF activity by LEAFY, as was deduced from double mutant analysis. These results suggest that a mechanism involving a reciprocal negative regulation between the EMF genes and the floral genes regulates Arabidopsis inflorescence development.",
"title": ""
},
{
"docid": "3b7dcbefbbc20ca1a37fa318c2347b4c",
"text": "To better understand how individual differences influence the use of information technoiogy (IT), this study models and tests relationships among dynamic, IT-specific individual differences (i.e.. computer self-efficacy and computer anxiety). stable, situation-specific traits (i.e., personal innovativeness in IT) and stable, broad traits (i.e.. ''Cynthia Beath was the accepting senior editor for this paper. trait anxiety and negative affectivity). When compared to broad traits, the model suggests that situation-specific traits exert a more pervasive influence on IT situation-specific individual differences. Further, the modei suggests that computer anxiety mediates the influence of situationspecific traits (i.e., personal innovativeness) on computer self-efficacy. Results provide support for many of the hypothesized relationships. From a theoretical perspective, the findings help to further our understanding of the nomological network among individual differences that lead to computer self-efficacy. From a practical perspective, the findings may help IT managers design training programs that more effectiveiy increase the computer self-efficacy of users with different dispositional characteristics.",
"title": ""
},
{
"docid": "b84d6210438144ebe20271ceaffc28a3",
"text": "Although precision agriculture has been adopted in few countries; the agriculture industry in India still needs to be modernized with the involvement of technologies for better production, distribution and cost control. In this paper we proposed a multidisciplinary model for smart agriculture based on the key technologies: Internet-of-Things (IoT), Sensors, Cloud-Computing, MobileComputing, Big-Data analysis. Farmers, AgroMarketing agencies and Agro-Vendors need to be registered to the AgroCloud module through MobileApp module. AgroCloud storage is used to store the details of farmers, periodic soil properties of farmlands, agro-vendors and agro-marketing agencies, Agro e-governance schemes and current environmental conditions. Soil and environment properties are sensed and periodically sent to AgroCloud through IoT (Beagle Black Bone). Bigdata analysis on AgroCloud data is done for fertilizer requirements, best crop sequences analysis, total production, and current stock and market requirements. Proposed model is beneficial for increase in agricultural production and for cost control of Agro-products.",
"title": ""
},
{
"docid": "8f01f446890deb021ed6c6bead0b681a",
"text": "Three experiments explored whether conceptual mappings in conventional metaphors are productive, by testing whether the comprehension of novel metaphors was facilitated by first reading conceptually related conventional metaphors. The first experiment, a replication and extension of Keysar et al. [Keysar, B., Shen, Y., Glucksberg, S., Horton, W. (2000). Conventional language: How metaphorical is it? Journal of Memory and Language 43, 576–593] (Experiment 2), found no such facilitation; however, in the second experiment, upon re-designing and improving the stimulus materials, facilitation was demonstrated. In a final experiment, this facilitation was shown to be specific to the conceptual mappings involved. The authors argue that metaphor productivity provides a communicative advantage and that this may be sufficient to explain the clustering of metaphors into families noted by Lakoff and Johnson [Lakoff & Johnson, M. (1980a). The metaphorical structure of the human conceptual system. Cognitive Science 4, 195–208]. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dc2a55da87c78acfd4413ddebdec6a1c",
"text": "The past decade has seen an explosion in the amount of digital information stored in electronic health records (EHRs). While primarily designed for archiving patient information and performing administrative healthcare tasks like billing, many researchers have found secondary use of these records for various clinical informatics applications. Over the same period, the machine learning community has seen widespread advances in the field of deep learning. In this review, we survey the current research on applying deep learning to clinical tasks based on EHR data, where we find a variety of deep learning techniques and frameworks being applied to several types of clinical applications including information extraction, representation learning, outcome prediction, phenotyping, and deidentification. We identify several limitations of current research involving topics such as model interpretability, data heterogeneity, and lack of universal benchmarks. We conclude by summarizing the state of the field and identifying avenues of future deep EHR research.",
"title": ""
},
{
"docid": "8098f931296961e980a546b81781b5f9",
"text": "The problem of entity-typing has been studied predominantly in supervised learning fashion, mostly with task-specific annotations (for coarse types) and sometimes with distant supervision (for fine types). While such approaches have strong performance within datasets, they often lack the flexibility to transfer across text genres and to generalize to new type taxonomies. In this work we propose a zero-shot entity typing approach that requires no annotated data and can flexibly identify newly defined types. Given a type taxonomy defined as Boolean functions of FREEBASE “types”, we ground a given mention to a set of type-compatible Wikipedia entries and then infer the target mention’s types using an inference algorithm that makes use of the types of these entries. We evaluate our system on a broad range of datasets, including standard fine-grained and coarse-grained entity typing datasets, and also a dataset in the biological domain. Our system is shown to be competitive with state-of-theart supervised NER systems and outperforms them on out-of-domain datasets. We also show that our system significantly outperforms other zero-shot fine typing systems.",
"title": ""
},
{
"docid": "9b1bf9930b378232d03c43c007d1c151",
"text": "Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations. Unfortunately, incorporating additional sources of evidence, especially ones that are incomplete and noisy, is quite difficult to achieve in such models, however, is often crucial for obtaining further gains in accuracy. For example, additional information about businesses from reviews, categories, and attributes should be leveraged for predicting user preferences, even though this information is often inaccurate and partially-observed. Instead of creating customized methods that are specific to each type of evidences, in this paper we present a generic approach to factorization of relational data that collectively models all the relations in the database. By learning a set of embeddings that are shared across all the relations, the model is able to incorporate observed information from all the relations, while also predicting all the relations of interest. Our evaluation on multiple Amazon and Yelp datasets demonstrates effective utilization of additional information for held-out preference prediction, but further, we present accurate models even for the cold-starting businesses and products for which we do not observe any ratings or reviews. We also illustrate the capability of the model in imputing missing information and jointly visualizing words, categories, and attribute factors.",
"title": ""
},
{
"docid": "fab193a1d2f6ed59c72355aa8f415fa3",
"text": "All bacteria form persisters, cells that are multidrug tolerant and therefore able to survive antibiotic treatment. Due to the low frequencies of persisters in growing bacterial cultures and the complex underlying molecular mechanisms, the phenomenon has been challenging to study. However, recent technological advances in microfluidics and reporter genes have improved this scenario. Here, we summarize recent progress in the field, revealing the ubiquitous bacterial stress alarmone ppGpp as an emerging central regulator of multidrug tolerance and persistence, both in stochastically and environmentally induced persistence. In several different organisms, toxin-antitoxin modules function as effectors of ppGpp-induced persistence.",
"title": ""
},
{
"docid": "d4543989119d41154c1a34337de3f620",
"text": "This paper reports self-powered, autonomously operated bidirectional solid state circuit breakers (SSCB) with two back-to-back connected normally-on SiC JFETs as the main static switch for DC power systems. The SSCBs detect short circuit faults by sensing the sudden voltage rise between its two power terminals in either direction, and draws power from the fault condition itself to turn and hold off the SiC JFETs. The two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or extra wiring. A low-power, fast-starting, isolated DC/DC converter is designed and optimized to activate the SSCB in response to a short circuit fault. The SSCB prototypes are experimentally demonstrated to interrupt fault currents up to 150 amperes at a DC bus voltage of 400 volts within 0.7 microseconds.",
"title": ""
},
{
"docid": "14b15f15cb7dbb3c19a09323b4b67527",
"text": " Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification Providing a detailed analysis of legal, ethical and social repercussion of reversible/non-reversible de-identification Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications",
"title": ""
},
{
"docid": "ae12d709da329eea3cc8e49c98c21518",
"text": "This paper aims to explore how socialand self-factors may affect consumers’ brand loyalty while they follow companies’ microblogs. Drawing upon the commitment-trust theory, social influence theory, and self-congruence theory, we propose that network externalities, social norms, and self-congruence are the key determinants in the research model. The impacts of these factors on brand loyalty will be mediated by brand trust and brand commitment. We empirically test the model through an online survey on an existing microblogging site. The findings illustrate that network externalities and self-congruence can positively affect brand trust, which subsequently leads to brand commitment and brand loyalty. Meanwhile, social norms, together with self-congruence, directly posit influence on brand commitment. Brand commitment is then positively associated with brand loyalty. We believe that the findings of this research can contribute to the literature. We offer new insights regarding how consumers’ brand loyalty develops from the two social-factors and their self-congruence with the brand. Company managers could also apply our findings to strengthen their relationship marketing with consumers on microblogging sites.",
"title": ""
},
{
"docid": "5a9f6b9f6f278f5f3359d5d58b8516a8",
"text": "BACKGROUND\nMusculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process.\n\n\nCONCLUSIONS\nThe method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.",
"title": ""
},
{
"docid": "4b2e4a1bd3c6f6af713e507f1d63ba07",
"text": "Model validation constitutes a very important step in system dynamics methodology. Yet, both published and informal evidence indicates that there has been little effort in system dynamics community explicitly devoted to model validity and validation. Validation is a prolonged and complicated process, involving both formal/quantitative tools and informal/ qualitative ones. This paper focuses on the formal aspects of validation and presents a taxonomy of various aspects and steps of formal model validation. First, there is a very brief discussion of the philosophical issues involved in model validation, followed by a flowchart that describes the logical sequence in which various validation activities must be carried out. The crucial nature of structure validity in system dynamics (causal-descriptive) models is emphasized. Then examples are given of specific validity tests used in each of the three major stages of model validation: Structural tests. Introduction",
"title": ""
},
{
"docid": "dfc618f0ef6497d8ad45aab5396da9db",
"text": "Beginning in the mid-1990s, a number of consultants independently created and evolved what later came to be known as agile software development methodologies. Agile methodologies and practices emerged as an attempt to more formally and explicitly embrace higher rates of change in software requirements and customer expectations. Some prominent agile methodologies are Adaptive Software Development, Crystal, Dynamic Systems Development Method, Extreme Programming (XP), Feature-Driven Development (FDD), Pragmatic Programming, and Scrum. This chapter presents the principles that underlie and unite the agile methodologies. Then, 32 practices used in agile methodologies are presented. Finally, three agile methodologies (XP, FDD, and Scrum) are explained. Most often, software development teams select a subset of the agile practices and create their own hybrid software development methodology rather than strictly adhere to all the practices of a predefined agile methodology. Teams that use primarily agile practices are most often smallto medium-sized, colocated teams working on less complex projects. 1. A gile Origins and Manifesto . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2. A gile and Lean Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 .1.",
"title": ""
},
{
"docid": "0bd86b41fb7a183b5ea0f2e7836040ab",
"text": "Profiling driving behavior has become a relevant aspect in fleet management, automotive insurance and eco-driving. Detecting inefficient or aggressive drivers can help reducing fleet degradation, insurance policy cost and fuel consumption. In this paper, we present a Fuzzy-Logic based driver scoring mechanism that uses smartphone sensing data, including accelerometers and GPS. In order to evaluate the proposed mechanism, we have collected traces from a testbed consisting in 20 vehicles equipped with an Android sensing application we have developed to this end. The results show that the proposed sensing variables using smartphones can be merged to provide each driver with a single score.",
"title": ""
}
] | scidocsrr |
1c3151bf8bff05862f34c000aef57f7c | Procrastination, deadlines, and performance: self-control by precommitment. | [
{
"docid": "08823059d089c1e553af85d5768332ca",
"text": "Hyperbolic discount functions induce dynamically inconsistent preferences, implying a motive for consumers to constrain their own future choices. This paper analyzes the decisions of a hyperbolic consumer who has access to an imperfect commitment technology: an illiquid asset whose sale must be initiated one period before the sale proceeds are received. The model predicts that consumption tracks income, and the model explains why consumers have asset-specic marginal propensities to consume. The model suggests that nancial innovation may have caused the ongoing decline in U. S. savings rates, since nancial innovation increases liquidity, eliminating commitment opportunities. Finally, the model implies that nancial market innovation may reduce welfare by providing “too much” liquidity.",
"title": ""
},
{
"docid": "afffadc35ac735d11e1a415c93d1c39f",
"text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)",
"title": ""
},
{
"docid": "a25041f4b95b68d2b8b9356d2f383b69",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
}
] | [
{
"docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d",
"text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.",
"title": ""
},
{
"docid": "dfdf2581010777e51ff3e29c5b9aee7f",
"text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.",
"title": ""
},
{
"docid": "cf7b17b690258dc50ec12bfbd9de232d",
"text": "In this paper, we propose a novel method for visual object tracking called HMMTxD. The method fuses observations from complementary out-of-the box trackers and a detector by utilizing a hidden Markov model whose latent states correspond to a binary vector expressing the failure of individual trackers. The Markov model is trained in an unsupervised way, relying on an online learned detector to provide a source of tracker-independent information for a modified BaumWelch algorithm that updates the model w.r.t. the partially annotated data. We show the effectiveness of the proposed method on combination of two and three tracking algorithms. The performance of HMMTxD is evaluated on two standard benchmarks (CVPR2013 and VOT) and on a rich collection of 77 publicly available sequences. The HMMTxD outperforms the state-of-the-art, often significantly, on all datasets in almost all criteria.",
"title": ""
},
{
"docid": "37e561a8dd29299dee5de2cb7781c5a3",
"text": "The management of knowledge and experience are key means by which systematic software development and process improvement occur. Within the domain of software engineering (SE), quality continues to remain an issue of concern. Although remedies such as fourth generation programming languages, structured techniques and object-oriented technology have been promoted, a \"silver bullet\" has yet to be found. Knowledge management (KM) gives organisations the opportunity to appreciate the challenges and complexities inherent in software development. We report on two case studies that investigate KM in SE at two IT organisations. Structured interviews were conducted, with the assistance of a qualitative questionnaire. The results were used to describe current practices for KM in SE, to investigate the nature of KM activities in these organisations, and to explain the impact of leadership, technology, culture and measurement as enablers of the KM process for SE.",
"title": ""
},
{
"docid": "ff49e2364503659cc520d7f2e5650906",
"text": "Linguists are increasingly using experiments to provide insight into linguistic representations and linguistic processing. But linguists are rarely trained to think experimentally, and designing a carefully controlled study is not trivial. This paper provides a practical introduction to experiments. We examine issues in experimental design and survey several methodologies. The goal is to provide readers with some tools for understanding and evaluating the rapidly growing literature using experimental methods, as well as for beginning to design experiments in their own research. © 2013 The Author. Language and Linguistics Compass © 2013 Blackwell Publishing Ltd.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "3826081b95447b442cda323ee7df39c4",
"text": "Despite the great success of convolutional neural networks (CNN) for the image classification task on datasets like Cifar and ImageNet, CNN’s representation power is still somewhat limited in dealing with object images that have large variation in size and clutter, where Fisher Vector (FV) has shown to be an effective encoding strategy. FV encodes an image by aggregating local descriptors with a universal generative Gaussian Mixture Model (GMM). FV however has limited learning capability and its parameters are mostly fixed after constructing the codebook. To combine together the best of the two worlds, we propose in this paper a neural network structure with FV layer being part of an end-to-end trainable system that is differentiable; we name our network FisherNet that is learnable using backpropagation. Our proposed FisherNet combines convolutional neural network training and Fisher Vector encoding in a single end-to-end structure. We observe a clear advantage of FisherNet over plain CNN and standard FV in terms of both classification accuracy and computational efficiency on the challenging PASCAL VOC object classification task.",
"title": ""
},
{
"docid": "cdf313ff69ebd11b360cd5e3b3942580",
"text": "This paper presents, for the first time, a novel pupil detection method for near-infrared head-mounted cameras, which relies not only on image appearance to pursue the shape and gradient variation of the pupil contour, but also on structure principle to explore the mechanism of pupil projection. There are three main characteristics in the proposed method. First, in order to complement the pupil projection information, an eyeball center calibration method is proposed to build an eye model. Second, by utilizing the deformation model of pupils under head-mounted cameras and the edge gradients of a circular pattern, we find the best fitting ellipse describing the pupil boundary. Third, an eye-model-based pupil fitting algorithm with only three parameters is proposed to fine-tune the final pupil contour. Consequently, the proposed method extracts the geometry-appearance information, effectively boosting the performance of pupil detection. Experimental results show that this method outperforms the state-of-the-art ones. On a widely used public database (LPW), our method achieves 72.62% in terms of detection rate up to an error of five pixels, which is superior to the previous best one.",
"title": ""
},
{
"docid": "05eb1af3e6838640b6dc5c1c128cc78a",
"text": "Predicting the success of referring expressions (RE) is vital for real-world applications such as navigation systems. Traditionally, research has focused on studying Referring Expression Generation (REG) in virtual, controlled environments. In this paper, we describe a novel study of spatial references from real scenes rather than virtual. First, we investigate how humans describe objects in open, uncontrolled scenarios and compare our findings to those reported in virtual environments. We show that REs in real-world scenarios differ significantly to those in virtual worlds. Second, we propose a novel approach to quantifying image complexity when complete annotations are not present (e.g. due to poor object recognition capabitlities), and third, we present a model for success prediction of REs for objects in real scenes. Finally, we discuss implications for Natural Language Generation (NLG) systems and future directions.",
"title": ""
},
{
"docid": "b48e3a6bca79fa3737ad5635e5a25e83",
"text": "A new low-cost design for a planar balanced doubler with ultra-wide bandwidth is presented. This doubler utilizes two types of ultra-wideband transitions: microstrip-to-CPW (coplanar waveguide) and microstrip-to-CPS (coplanar stripline) transitions. The transitions are designed to provide field and impedance matching between adjacent transmission lines. The fabricated doubler provides less than 10 dB conversion loss for output frequencies from 9 GHz to 26 GHz and less than 12 dB conversion loss from 7 GHz to 38 GHz.",
"title": ""
},
{
"docid": "c66f67cf3693690505c087ba8667e38c",
"text": "An earlier study compared audiovisual perception of speech ’produced in environmental noise’ (Lombard speech) and speech ’produced in quiet’ with the same environmental noise added. The results and showed that listeners make differential use of the visual information depending on the recording condition, but gave no indication of how or why this might be so. A possible confound in that study was that high audio presentation levels might account for the small visual enhancements observed for Lombard speech. This paper reports results for a second perception study using much lower acoustic presentation levels, compares them with the results of the previous study, and integrates the perception results with analyses of the audiovisual production data: face and head motion, audio amplitude (RMS), and parameters of the spectral acoustics (line spectrum pairs).",
"title": ""
},
{
"docid": "d063f8a20e2b6522fe637794e27d7275",
"text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.",
"title": ""
},
{
"docid": "019d82f4c24ee08c3ee5593a18add850",
"text": "A new approach for real-time scene text recognition is proposed in this paper. A novel binary convolutional encoderdecoder network (B-CEDNet) together with a bidirectional recurrent neural network (Bi-RNN). The B-CEDNet is engaged as a visual front-end to provide elaborated character detection, and a back-end Bi-RNN performs characterlevel sequential correction and classification based on learned contextual knowledge. The front-end B-CEDNet can process multiple regions containing characters using a one-off forward operation, and is trained under binary constraints with significant compression. Hence it leads to both remarkable inference run-time speedup as well as memory usage reduction. With the elaborated character detection, the back-end Bi-RNN merely processes a low dimension feature sequence with category and spatial information of extracted characters for sequence correction and classification. By training with over 1,000,000 synthetic scene text images, the B-CEDNet achieves a recall rate of 0.86, precision of 0.88 and F-score of 0.87 on ICDAR-03 and ICDAR-13. With the correction and classification by Bi-RNN, the proposed real-time scene text recognition achieves state-of-the-art accuracy while only consumes less than 1-ms inference run-time. The flow processing flow is realized on GPU with a small network size of 1.01 MB for B-CEDNet and 3.23 MB for Bi-RNN, which is much faster and smaller than the existing solutions. Introduction The success of convolutional neural network (CNN) has resulted in a potential general machine learning engine for various computer vision applications (LeCun et al. 1998; Krizhevsky, Sutskever, and Hinton 2012), such as text detection, recognition and interpretation from images. Applications, such as Advanced Driver Assistance System (ADAS) for road signs with text, however, require a real-time processing capability that is beyond the existing approaches (Jaderberg et al. 2014; Jaderberg, Vedaldi, and Zisserman 2014) in terms of processing functionality, efficiency and latency. For a real-time scene text recognition application, one needs a method with memory efficiency and fast processing time. In this paper, we reveal that binary features (Courbariaux and Bengio 2016) can effectively and efficiently Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. represent the scene text image. Combining with deconvolution technique, we introduce a binary convolutional encoderdecoder network (B-CEDNet) for real-time one-shot character detection and recognition. The scene text recognition is further enhanced with a back-end character-level sequential correction and classification, based on a bidirectional recurrent neural network (Bi-RNN). Instead of detecting characters sequentially (Bissacco et al. 2013; Wang et al. 2012; Shi, Bai, and Yao 2015), our proposed method, called SqueezedText, can detect multiple characters simultaneously and extracts a length-variable character sequence with corresponding spatial information. This sequence will be subsequently fed into a Bi-RNN, which then learns the detection error characteristics from the previous stage to provides characterlevel correction and classification based on the spatial and contextual cues. By training with over 1,000,000 synthetic scene text images, the proposed SqueezedText can achieve recall rate of 0.86, precision of 0.88 and F-score of 0.87 on ICDAR-03 (Lucas et al. 2003) dataset. 
More importantly, it achieves state-of-the-art accuracy of 93.8%, 92.7%, 94.3%, 96.1% and 83.6% on the ICDAR-03, ICDAR-13, IIIT5K, STV and Synthe90K datasets. SqueezedText is realized on GPU with a small network size of 1.01 MB for B-CEDNet and 3.23 MB for Bi-RNN, and consumes less than 1 ms of inference runtime on average. It is up to 4× faster and 6× smaller than state-of-the-art work. The contributions of this paper are summarized as follows: • We propose a novel binary convolutional encoder-decoder neural network model, which acts as a visual front-end module to provide unconstrained scene text detection and recognition. It effectively detects individual characters with a high recall rate, realizing an extremely fast run-time speed and small memory consumption. • We reveal that the text features can be learned and encoded in binary format without loss of discriminative information. This information can be further decoded and recovered to perform multi-character detection and recognition in parallel. • We further design a back-end bidirectional RNN (Bi-RNN) to provide fast and robust scene text recognition with correction and classification.",
"title": ""
},
{
"docid": "209628a716a3e81e91f2931fae4f355d",
"text": "The effects of ṫ̇raining and/or ageing upon maximal oxygen uptake (V̇O2max) and heart rate values at rest (HRrest) and maximal exercise (HRmax), respectively, suggest a relationship between V̇O2max and the HRmax-to-HRrest ratio which may be of use for indirect testing of V̇O2max. Fick principle calculations supplemented by literature data on maximum-to-rest ratios for stroke volume and the arterio-venous O2 difference suggest that the conversion factor between mass-specific V̇O2max (ml·min−1·kg−1) and HRmax·HRrest −1 is ~15. In the study we experimentally examined this relationship and evaluated its potential for prediction of V̇O2max. V̇O2max was measured in 46 well-trained men (age 21–51 years) during a treadmill protocol. A subgroup (n=10) demonstrated that the proportionality factor between HRmax·HRrest −1 and mass-specific V̇O2max was 15.3 (0.7) ml·min−1·kg−1. Using this value, V̇O2max in the remaining 36 individuals could be estimated with an SEE of 0.21 l·min−1 or 2.7 ml·min−1·kg−1 (~4.5%). This compares favourably with other common indirect tests. When replacing measured HRmax with an age-predicted one, SEE was 0.37 l·min−1 and 4.7 ml·min−1·kg−1 (~7.8%), which is still comparable with other indirect tests. We conclude that the HRmax-to-HRrest ratio may provide a tool for estimation of V̇O2max in well-trained men. The applicability of the test principle in relation to other groups will have to await direct validation. V̇O2max can be estimated indirectly from the measured HRmax-to-HRrest ratio with an accuracy that compares favourably with that of other common indirect tests. The results also suggest that the test may be of use for V̇O2max estimation based on resting measurements alone.",
"title": ""
},
{
"docid": "5e135da54b6ba5e9005d61bd64bbd2c9",
"text": "A miniaturized Marchand balun combiner is proposed for a W-band power amplifier (PA). The proposed combiner reduces the electrical length of the transmission lines (transmission line) from about 80 <sup>°</sup> to 30 <sup>°</sup>, when compared with a conventional Marchand balun combiner. Implemented in a 1-V 65-nm CMOS process, the presented PA achieves a measured saturated output power of 11.9 dBm and a peak power-added efficiency of 9.0% at 87 GHz. The total chip area (with pads) is 0.77×0.48 mm<sup>2</sup>, where the size of the balun combiner is only 0.36×0.13 mm<sup>2</sup>.",
"title": ""
},
{
"docid": "f61a7e280cffe673a9068cf33fd6f803",
"text": "Enterprise Resource Planning (ERP) systems are highly integrated enterprise-wide information systems that automate core business processes. The ERP packages of vendors such as SAP, Baan, J.D. Edwards, Peoplesoft and Intentia represent more than a standard business platform, they prescribe information blueprints of how an organisation’s business processes should operate. In this paper the scale and strategic importance of ERP systems are identified and the problem of ERP implementation is defined. A Critical Success Factors (CSFs) framework is proposed to aid managers develop an ERP implementation strategy. The framework is illustrated using two case examples from a research sample of eight companies. The case analysis highlights the critical impact of legacy systems upon the implementation process, the importance of selecting an appropriate ERP strategy and identifies the importance of Business Process Change (BPC) and software configuration in addition to factors already cited in the literature. The implications of the results for managerial practice are described and future research opportunities are outlined.",
"title": ""
},
{
"docid": "4028f1eb3f14297fea30ae43fdf7fbb6",
"text": "The optimisation of a tail-sitter UAV (Unmanned Aerial Vehicle) that uses a stall-tumble manoeuvre to transition from vertical to horizontal flight and a pull-up manoeuvre to regain the vertical is investigated. The tandem wing vehicle is controlled in the hover and vertical flight phases by prop-wash over wing mounted control surfaces. It represents an innovative and potentially simple solution to the dual requirements of VTOL (Vertical Take-off and Landing) and high speed forward flight by obviating the need for complex mechanical systems such as rotor heads or tilt-rotor systems.",
"title": ""
},
{
"docid": "8bef38182a5cd0f05cfa9c51887c74f5",
"text": "The limited number of oral vaccines currently approved for use in humans and veterinary species clearly illustrates that development of efficacious and safe oral vaccines has been a challenge not only for fish immunologists. The insufficient efficacy of oral vaccines is partly due to antigen breakdown in the harsh gastric environment, but also to the high tolerogenic gut environment and to inadequate vaccine design. In this review we discuss current approaches used to develop oral vaccines for mass vaccination of farmed fish species. Furthermore, using various examples from the human and veterinary vaccine development, we propose additional approaches to fish vaccine design also considering recent advances in fish mucosal immunology and novel molecular tools. Finally, we discuss the pros and cons of using the zebrafish as a pre-screening animal model to potentially speed up vaccine design and testing for aquaculture fish species.",
"title": ""
},
{
"docid": "ce463006a11477c653c15eb53f673837",
"text": "This paper presents a meaning-based statistical math word problem (MWP) solver with understanding, reasoning and explanation. It comprises a web user interface and pipelined modules for analysing the text, transforming both body and question parts into their logic forms, and then performing inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating the extracted math quantity with its associated syntactic and semantic information (which specifies the physical meaning of that quantity). Those role-tags are then used to identify the desired operands and filter out irrelevant quantities (so that the answer can be obtained precisely). Since the physical meaning of each quantity is explicitly represented with those role-tags and used in the inference process, the proposed approach could explain how the answer is obtained in a human comprehensible way.",
"title": ""
},
{
"docid": "d438491c76e6afcdd7ad9a6351f1fda8",
"text": "Acoustic word embeddings — fixed-dimensional vector representations of variable-length spoken word segments — have begun to be considered for tasks such as speech recognition and query-by-example search. Such embeddings can be learned discriminatively so that they are similar for speech segments corresponding to the same word, while being dissimilar for segments corresponding to different words. Recent work has found that acoustic word embeddings can outperform dynamic time warping on query-by-example search and related word discrimination tasks. However, the space of embedding models and training approaches is still relatively unexplored. In this paper we present new discriminative embedding models based on recurrent neural networks (RNNs). We consider training losses that have been successful in prior work, in particular a cross entropy loss for word classification and a contrastive loss that explicitly aims to separate same-word and different-word pairs in a “Siamese network” training setting. We find that both classifier-based and Siamese RNN embeddings improve over previously reported results on a word discrimination task, with Siamese RNNs outperforming classification models. In addition, we present analyses of the learned embeddings and the effects of variables such as dimensionality and network structure.",
"title": ""
}
] | scidocsrr |
f774a5e356a6460e24685ecf50fc1d06 | The role of orienting in vibrissal touch sensing | [
{
"docid": "0d723c344ab5f99447f7ad2ff72c0455",
"text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.",
"title": ""
}
] | [
{
"docid": "625f54bb3157e429af1af8f0d04f0713",
"text": "Proof theory is a powerful tool for understanding computational phenomena, as most famously exemplified by the Curry–Howard isomorphism between intuitionistic logic and the simply-typed λ-calculus. In this paper, we identify a fragment of intuitionistic linear logic with least fixed points and establish a Curry–Howard isomorphism between a class of proofs in this fragment and deterministic finite automata. Proof-theoretically, closure of regular languages under complementation, union, and intersection can then be understood in terms of cut elimination. We also establish an isomorphism between a different class of proofs and subsequential string transducers. Because prior work has shown that linear proofs can be seen as session-typed processes, a concurrent semantics of transducer composition is obtained for free. 1998 ACM Subject Classification F.4.1 Mathematical Logic; F.1.1 Models of Computation",
"title": ""
},
{
"docid": "8b764c3b6576e8334979503d9d76a8d3",
"text": "Twitter is a well-known micro-blogging website which allows millions of users to interact over different types of communities, topics, and tweeting trends. The big data being generated on Twitter daily, and its significant impact on social networking, has motivated the application of data mining (analysis) to extract useful information from tweets. In this paper, we analyze the impact of tweets in predicting the winner of the recent 2013 election held in Pakistan. We identify relevant Twitter users, pre-process their tweets, and construct predictive models for three representative political parties which were significantly tweeted, i.e., Pakistan Tehreek-e-Insaaf (PTI), Pakistan Muslim League Nawaz (PMLN), and Muttahida Qaumi Movement (MQM). The predictions for last four days before the elections showed that PTI will emerge as the election winner, which was actually won by PMLN. However, considering that PTI obtained landslide victory in one province and bagged several important seats across the country, we conclude that Twitter can have some type of a positive influence on the election result, although it cannot be considered representative of the overall voting population.",
"title": ""
},
{
"docid": "ec5e3b472973e3f77812976b1dd300a5",
"text": "In this thesis we investigate different methods of automating behavioral analysis in animal videos using shapeand motion-based models, with a focus on classifying large datasets of rodent footage. In order to leverage the recent advances in deep learning techniques a massive number of training samples is required, which has lead to the development of a data transfer pipeline to gather footage from multiple video sources and a custom-built web-based video annotation tool to create annotation datasets. Finally we develop and compare new deep convolutional and recurrent-convolutional neural network architectures that outperform existing systems.",
"title": ""
},
{
"docid": "2749fc2afe66efab60abc7ca33cc666a",
"text": "The pure methods in a program are those that exhibit functional or side effect free behaviour, a useful property in many contexts. However, existing purity investigations present primarily staticresults. We perform a detailed examination of dynamic method purityin Java programs using a JVM-based analysis. We evaluate multiple purity definitions that range from strong to weak, consider purity forms specific to dynamic execution, and accomodate constraintsimposed by an example consumer application, memoization. We show that while dynamic method purity is actually fairly consistent between programs, examining pure invocation counts and the percentage of the byte code instruction stream contained within some pure method reveals great variation. We also show that while weakening purity definitions exposes considerable dynamic purity, consumer requirements can limitthe actual utility of this information.",
"title": ""
},
{
"docid": "c50cf41ef8cc85be0558f9132c60b1f5",
"text": "A System Architecture for Context-Aware Mobile Computing William Noah Schilit Computer applications traditionally expect a static execution environment. However, this precondition is generally not possible for mobile systems, where the world around an application is constantly changing. This thesis explores how to support and also exploit the dynamic configurations and social settings characteristic of mobile systems. More specifically, it advances the following goals: (1) enabling seamless interaction across devices; (2) creating physical spaces that are responsive to users; and (3) and building applications that are aware of the context of their use. Examples of these goals are: continuing in your office a program started at home; using a PDA to control someone else’s windowing UI; automatically canceling phone forwarding upon return to your office; having an airport overheaddisplay highlight the flight information viewers are likely to be interested in; easily locating and using the nearest printer or fax machine; and automatically turning off a PDA’s audible e-mail notification when in a meeting. The contribution of this thesis is an architecture to support context-aware computing; that is, application adaptation triggered by such things as the location of use, the collection of nearby people, the presence of accessible devices and other kinds of objects, as well as changes to all these things over time. Three key issues are addressed: (1) the information needs of applications, (2) where applications get various pieces of information and (3) how information can be efficiently distributed. A dynamic environment communication model is introduced as a general mechanism for quickly and efficiently learning about changes occurring in the environment in a fault tolerant manner. For purposes of scalability, multiple dynamic environment servers store user, device, and, for each geographic region, context information. In order to efficiently disseminate information from these components to applications, a dynamic collection of multicast groups is employed. The thesis also describes a demonstration system based on the Xerox PARCTAB, a wireless palmtop computer.",
"title": ""
},
{
"docid": "0e4cf084d126a0c87e88e3e95ec2cf42",
"text": "Owing to the increasing importance of genomic information, obtaining genomic DNA easily from biological specimens has become more and more important. This article proposes an efficient method for obtaining genomic DNA from nail clippings. Nail clippings can be easily obtained, are thermostable and easy to transport, and have low infectivity. The drawback of their use, however, has been the difficulty of extracting genomic material from them. We have overcome this obstacle using the protease solution obtained from Cucumis melo. The keratinolytic activity of the protease solution was 1.78-fold higher than that of proteinase K, which is commonly used to degrade keratin. With the protease solution, three times more DNA was extracted than when proteinase K was used. In order to verify the integrity of the extracted DNA, genotype analysis on 170 subjects was performed by both PCR-RFLP and Real Time PCR. The results of the genotyping showed that the extracted DNA was suitable for genotyping analysis. In conclusion, we have developed an efficient extraction method for using nail clippings as a genome source and a research tool in molecular epidemiology, medical diagnostics, and forensic science.",
"title": ""
},
{
"docid": "ae5976a021bd0c4ff5ce14525c1716e7",
"text": "We present PARAM 1.0, a model checker for parametric discrete-time Markov chains (PMCs). PARAM can evaluate temporal properties of PMCs and certain extensions of this class. Due to parametricity, evaluation results are polynomials or rational functions. By instantiating the parameters in the result function, one can cheaply obtain results for multiple individual instantiations, based on only a single more expensive analysis. In addition, it is possible to post-process the result function symbolically using for instance computer algebra packages, to derive optimum parameters or to identify worst cases.",
"title": ""
},
{
"docid": "30c7bc7bd823935969e6086a9e728515",
"text": "A systematic methodology for layout optimization of active devices for millimeter-wave (mm-wave) application is proposed. A hybrid mm-wave modeling technique was developed to extend the validity of the device compact models up to 100 GHz. These methods resulted in the design of a customized 90 nm device layout which yields an extrapolated of 300 GHz from an intrinsic device . The device is incorporated into a low-power 60 GHz amplifier consuming 10.5 mW, providing 12.2 dB of gain, and an output of 4 dBm. An experimental three-stage 104 GHz tuned amplifier has a measured peak gain of 9.3 dB. Finally, a Colpitts oscillator operating at 104 GHz delivers up to 5 dBm of output power while consuming 6.5 mW.",
"title": ""
},
{
"docid": "df11a24f72f6964e4ca123bc8f6e1e5e",
"text": "The matching performance of automated face recognition has significantly improved over the past decade. At the same time several challenges remain that significantly affect the deployment of such systems in security applications. In this work, we study the impact of a commonly used face altering technique that has received limited attention in the biometric literature, viz., non-permanent facial makeup. Towards understanding its impact, we first assemble two databases containing face images of subjects, before and after applying makeup. We present experimental results on both databases that reveal the effect of makeup on automated face recognition and suggest that this simple alteration can indeed compromise the accuracy of a biometric system. While these are early results, our findings clearly indicate the need for a better understanding of this face altering scheme and the importance of designing algorithms that can successfully overcome the obstacle imposed by the application of facial makeup.",
"title": ""
},
{
"docid": "c64d46b03514b427766410a0dcefe3c2",
"text": "We introduce a rate-based congestion control mechanism for Content-Centric Networking (CCN). It builds on the fact that one Interest retrieves at most one Data packet. Congestion can occur when aggregate conversations arrive in excess and fill up the transmission queue of a CCN router. We compute the available capacity of each CCN router in a distributed way in order to shape their conversations Interest rate and therefore, adjust dynamically their Data rate and transmission buffer occupancy. We demonstrate the convergence properties of this Hop-by-hop Interest Shaping mechanism (HoBHIS) and provide a performance analysis based on various scenarios using our ns2 simulation environment.",
"title": ""
},
{
"docid": "40735be327c91882fdfc2cb57ad12f37",
"text": "BACKGROUND\nPolymorphism in the gene for angiotensin-converting enzyme (ACE), especially the DD genotype, is associated with risk for cardiovascular disease. Glomerulosclerosis has similarities to atherosclerosis, and we looked at ACE gene polymorphism in patients with kidney disease who were in a trial of long-term therapy with an ACE inhibitor or a beta-blocker.\n\n\nMETHODS\n81 patients with non-diabetic renal disease had been entered into a randomised comparison of oral atenolol or enalapril to prevent progressive decline in renal function. The dose was titrated to a goal diastolic blood pressure of 10 mm Hg below baseline and/or below 95 mm Hg. The mean (SE) age was 50 (1) years, and the group included 49 men. Their renal function had been monitored over 3-4 years. We have looked at their ACE genotype, which we assessed with PCR.\n\n\nFINDINGS\n27 patients had the II genotype, 37 were ID, and 17 were DD. 11 patients were lost to follow-up over 1-3 years. The decline of glomerular filtration rate over the years was significantly steeper in the DD group than in the ID and the II groups (p = 0.02; means -3.79, -1.37, and -1.12 mL/min per year, respectively). The DD patients treated with enalapril fared as equally a bad course as the DD patients treated with atenolol. Neither drug lowered the degree of proteinuria in the DD group.\n\n\nINTERPRETATION\nOur data show that patients with the DD genotype are resistant to commonly advocated renoprotective therapy.",
"title": ""
},
{
"docid": "8eb84b8d29c8f9b71c92696508c9c580",
"text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.",
"title": ""
},
{
"docid": "70e88fe5fc43e0815a1efa05e17f7277",
"text": "Smoke detection is a crucial task in many video surveillance applications and could have a great impact to raise the level of safety of urban areas. Many commercial smoke detection sensors exist but most of them cannot be applied in open space or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflexes. A novel Bayesian approach is adopted to detect smoke regions in the scene analyzing image energy by means of the Wavelet Transform coefficients and Color Information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene then the detection is strengthen evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of detecting rapidly smoke events both in night and in day conditions with a reduced number of false alarms hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign both on recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real world scenarios, such as outdoor storages and forests.",
"title": ""
},
{
"docid": "e5380801d69c3acf7bfe36e868b1dadb",
"text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.",
"title": ""
},
{
"docid": "de119196672efda310f457b15f0b6e63",
"text": "Agile processes focus on facilitating early and fast production of working code, and are based on software development process models that support iterative, incremental development of software. Although agile methods have existed for a number of years now, answers to questions concerning the suitability of agile processes to particular software development environments are still often based on anecdotal accounts of experiences. An appreciation of the (often unstated) assumptions underlying agile processes can lead to a better understanding of the applicability of agile processes to particular situations. Agile processes are less likely to be applicable in situations in which core assumptions do not hold. This paper examines the principles and advocated practices of agile processes to identify underlying assumptions. The paper also identifies limitations that may arise from these assumptions and outlines how the limitations can be addresses by incorporating other software development techniques and practices into agile development environments.",
"title": ""
},
{
"docid": "1e607279360f3318f3f020e19e1bd86f",
"text": "Only one late period is allowed for this homework (11:59pm 2/23). Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Warning: This problem requires substantial computing time (it can be a few hours on some systems). Don't start it at the last minute. 7 7 7 The goal of this problem is to implement the Stochastic Gradient Descent algorithm to build a Latent Factor Recommendation system. We can use it to recommend movies to users.",
"title": ""
},
{
"docid": "488c7437a32daec6fbad12e07bb31f4c",
"text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.",
"title": ""
},
{
"docid": "22d9a5bbb35890bfbe4fb64e289d102b",
"text": "A secure slip knot is very important in the field of arthroscopy. The new giant knot, developed by the first author, has the properties of being a one-way self-locking slip knot, which is secured without additional half hitches and can tolerate higher forces to be untied.",
"title": ""
},
{
"docid": "af740d54f1b6d168500934a089a1adc8",
"text": "Abstract In this paper, unsteady laminar flow around a circular cylinder has been studied. Navier-stokes equations solved by Simple C algorithm exerted to specified structured and unstructured grids. Equations solved by staggered method and discretization of those done by upwind method. The mean drag coefficient, lift coefficient and strouhal number are compared from current work at three different Reynolds numbers with experimental and numerical values.",
"title": ""
},
{
"docid": "8be1a6ae2328bbcc2d0265df167ecbb3",
"text": "It is increasingly necessary for researchers in all fields to write computer code, and in order to reproduce research results, it is important that this code is published. We present Jupyter notebooks, a document format for publishing code, results and explanations in a form that is both readable and executable. We discuss various tools and use cases for notebook documents.",
"title": ""
}
] | scidocsrr |
4e0f6f8bd82c58416245439a3e0b0086 | Emotional memories are not all created equal: evidence for selective memory enhancement. | [
{
"docid": "7d26c09bf274ae41f19a6aafc6a43d18",
"text": "Converging findings of animal and human studies provide compelling evidence that the amygdala is critically involved in enabling us to acquire and retain lasting memories of emotional experiences. This review focuses primarily on the findings of research investigating the role of the amygdala in modulating the consolidation of long-term memories. Considerable evidence from animal studies investigating the effects of posttraining systemic or intra-amygdala infusions of hormones and drugs, as well as selective lesions of specific amygdala nuclei, indicates that (a) the amygdala mediates the memory-modulating effects of adrenal stress hormones and several classes of neurotransmitters; (b) the effects are selectively mediated by the basolateral complex of the amygdala (BLA); (c) the influences involve interactions of several neuromodulatory systems within the BLA that converge in influencing noradrenergic and muscarinic cholinergic activation; (d) the BLA modulates memory consolidation via efferents to other brain regions, including the caudate nucleus, nucleus accumbens, and cortex; and (e) the BLA modulates the consolidation of memory of many different kinds of information. The findings of human brain imaging studies are consistent with those of animal studies in suggesting that activation of the amygdala influences the consolidation of long-term memory; the degree of activation of the amygdala by emotional arousal during encoding of emotionally arousing material (either pleasant or unpleasant) correlates highly with subsequent recall. The activation of neuromodulatory systems affecting the BLA and its projections to other brain regions involved in processing different kinds of information plays a key role in enabling emotionally significant experiences to be well remembered.",
"title": ""
}
] | [
{
"docid": "28b2da27bf62b7989861390a82940d88",
"text": "End users are said to be “the weakest link” in information systems (IS) security management in the workplace. they often knowingly engage in certain insecure uses of IS and violate security policies without malicious intentions. Few studies, however, have examined end user motivation to engage in such behavior. to fill this research gap, in the present study we propose and test empirically a nonmalicious security violation (NMSV) model with data from a survey of end users at work. the results suggest that utilitarian outcomes (relative advantage for job performance, perceived security risk), normative outcomes (workgroup norms), and self-identity outcomes (perceived identity match) are key determinants of end user intentions to engage in NMSVs. In contrast, the influences of attitudes toward security policy and perceived sanctions are not significant. this study makes several significant contributions to research on security-related behavior by (1) highlighting the importance of job performance goals and security risk perceptions on shaping user attitudes, (2) demonstrating the effect of workgroup norms on both user attitudes and behavioral intentions, (3) introducing and testing the effect of perceived identity match on user attitudes and behavioral intentions, and (4) identifying nonlinear relationships between constructs. this study also informs security management practices on the importance of linking security and business objectives, obtaining user buy-in of security measures, and cultivating a culture of secure behavior at local workgroup levels in organizations. KeY words and PHrases: information systems security, nonlinear construct relationships, nonmalicious security violation, perceived identity match, perceived security risk, relative advantage for job performance, workgroup norms. information sYstems (is) securitY Has become a major cHallenGe for organizations thanks to the increasing corporate use of the Internet and, more recently, wireless networks. In the 2010 computer Security Institute (cSI) survey of computer security practitioners in u.S. organizations, more than 41 percent of the respondents reported security incidents [68]. In the united Kingdom, a similar survey found that 45 percent of the participating companies had security incidents in 2008 [37]. While the causes for these security incidents may be difficult to fully identify, it is generally understood that insiders from within organizations pose a major threat to IS security [36, 55]. For example, peer-to-peer file-sharing software installed by employees may cause inadvertent disclosure of sensitive business information over the Internet [41]. Employees writing down passwords on a sticky note or choosing easy-to-guess passwords may risk having their system access privilege be abused by others [98]. the 2010 cSI survey found that nonmalicious insiders are a big issue [68]. according to the survey, more than 14 percent of the respondents reported that nearly all their losses were due to nonmalicious, careless behaviors of insiders. Indeed, end users are often viewed as “the weakest link” in the IS security chain [73], and fundamentally IS security has a “behavioral root” [94]. uNDErStaNDING NONMalIcIOuS SEcurItY VIOlatIONS IN tHE WOrKPlacE 205 a frequently recommended organizational measure for dealing with internal threats posed by end user behavior is security policy [6]. 
For example, a security policy may specify what end users should (or should not) do with organizational IS assets, and it may also spell out the consequences of policy violations. Having a policy in place, however, does not necessarily guarantee security because end users may not always act as prescribed [7]. a practitioner survey found that even if end users were aware of potential security problems related to their actions, many of them did not follow security best practices and continued to engage in behaviors that could open their organizations’ IS to serious security risks [62]. For example, the survey found that many employees allowed others to use their computing devices at work despite their awareness of possible security implications. It was also reported that many end users do not follow policies and some of them knowingly violate policies without worry of repercussions [22]. this phenomenon raises an important question: What factors motivate end users to engage in such behaviors? the role of motivation has not been considered seriously in the IS security literature [75] and our understanding of the factors that motivate those undesirable user behaviors is still very limited. to fill this gap, the current study aims to investigate factors that influence end user attitudes and behavior toward organizational IS security. the rest of the paper is organized as follows. In the next section, we review the literature on end user security-related behaviors. We then propose a theoretical model of nonmalicious security violation and develop related hypotheses. this is followed by discussions of our research methods and data analysis. In the final section, we discuss our findings, implications for research and practice, limitations, and further research directions.",
"title": ""
},
{
"docid": "6f872a7e9620cff3b1cc4b75a04b09a5",
"text": "Effective management of asthma and other respiratory diseases requires constant monitoring and frequent data collection using a spirometer and longitudinal analysis. However, even after three decades of clinical use, there are very few personalized spirometers available on the market, especially those connecting to smartphones. To address this problem, we have developed mobileSpiro, a portable, low-cost spirometer intended for patient self-monitoring. The mobileSpiro API, and the accompanying Android application, interfaces with the spirometer hardware to capture, process and analyze the data. Our key contributions are automated algorithms on the smartphone which play a technician's role in detecting erroneous patient maneuvers, ensuring data quality, and coaching patients with easy-to-understand feedback, all packaged as an Android app. We demonstrate that mobileSpiro is as accurate as a commercial ISO13485 device, with an inter-device deviation in flow reading of less than 8%, and detects more than 95% of erroneous cough maneuvers in a public CDC dataset.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "115e2a6c5f8fdd3a8a720fcdf0cf3a6d",
"text": "In this work we present an Artificial Neural Network (ANN) approach to predict stock market indices. In particular, we focus our attention on their trend movement up or down. We provide results of experiments exploiting different Neural Networks architectures, namely the Multi-layer Perceptron (MLP), the Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural networks technique. We show importance of choosing correct input features and their preprocessing for learning algorithm. Finally we test our algorithm on the S&P500 and FOREX EUR/USD historical time series, predicting trend on the basis of data from the past n days, in the case of S&P500, or minutes, in the FOREX framework. We provide a novel approach based on combination of wavelets and CNN which outperforms basic neural networks approaches. Key–Words: Artificial neural networks, Multi-layer neural network, Convolutional neural network, Long shortterm memory, Recurrent neural network, Deep Learning, Stock markets, Time series analysis, financial forecasting",
"title": ""
},
{
"docid": "5236f684bc0fdf11855a439c9d3256f6",
"text": "The smart home is an environment, where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual’s intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead the privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer in the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.",
"title": ""
},
{
"docid": "521b34bd63b04757f9f2235edda57d33",
"text": "Recently, Cloud Computing introduces some new concepts that entirely change the way applications are built and deployed. Usually, Cloud systems rely on virtualization techniques to allocate computing resources on demand. Thus, scalability is a critical issue to the success of enterprises involved in doing business on the cloud. In this paper, we will describe the novel virtual cluster architecture for dynamic scaling of cloud applications in a virtualized Cloud Computing environment. An auto-scaling algorithm for automated provisioning and balancing of virtual machine resources based on active application sessions will be introduced. Also, the energy cost is considered in the proposed algorithm. Our work has demonstrated the proposed algorithm is capable of handling sudden load requirements, maintaining higher resource utilization and reducing energy cost.",
"title": ""
},
{
"docid": "358faa358eb07b8c724efcdb72334dc7",
"text": "We present a novel simple technique for rapidly creating and presenting interactive immersive 3D exploration experiences of 2D pictures and images of natural and artificial landscapes. Various application domains, ranging from virtual exploration of works of art to street navigation systems, can benefit from the approach. The method, dubbed PEEP, is motivated by the perceptual characteristics of the human visual system in interpreting perspective cues and detecting relative angles between lines. It applies to the common perspective images with zero or one vanishing points, and does not require the extraction of a precise geometric description of the scene. Taking as input a single image without other information, an automatic analysis technique fits a simple but perceptually consistent parametric 3D representation of the viewed space, which is used to drive an indirect constrained exploration method capable to provide the illusion of 3D exploration with realistic monocular (perspective and motion parallax) and binocular (stereo) depth cues. The effectiveness of the method is demonstrated on a variety of casual pictures and exploration configurations, including mobile devices.",
"title": ""
},
{
"docid": "ad131f6baec15a011252f484f1ef8f18",
"text": "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling. Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure the learned BN to be a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively. Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.",
"title": ""
},
{
"docid": "335fbbf27b34e3937c2f6772b3227d51",
"text": "WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage. The Paraphrase Database (PPDB) covers 650 times more words, but lacks the semantic structure of WordNet that would make it more directly useful for downstream tasks. We present a method for mapping words from PPDB to WordNet synsets with 89% accuracy. The mapping also lays important groundwork for incorporating WordNet’s relations into PPDB so as to increase its utility for semantic reasoning in applications.",
"title": ""
},
{
"docid": "d6c54837dbb1c07a0b9e2ed7b2945021",
"text": "Chatbots are software used in entertainment industry, businesses and user support. Chatbots are modeled on various techniques such as knowledge base, machine learning based. Machine learning based chatbots yields more practical results. Chatbot which gives responses based on the context of conversation tends to be more user friendly. The chatbot we are proposing demonstrates a method of developing chatbot which can follow the context of the conversation. This method uses TensorFlow for developing the neural network model of the chatbot and uses the nlp techniques to maintain the context of the conversation. This chatbots can be used in small industries or business for automating customer care as user queries will be handled by chatbots thus reducing need of human labour and expenditure.",
"title": ""
},
{
"docid": "3dbd27e460fd9d3d80967c8215e7cb29",
"text": "Transmission line sag, tension and conductor length varies with the variation of temperature due to thermal expansion and elastic elongation. Beside thermal effect, wind pressure and ice accumulation creates a horizontal and vertical loading on the conductor respectively. Such changes make the calculated data uncertain and require an uncertainty model. A novel affine arithmetic (AA) based transmission line sag, tension and conductor length calculation for parabolic curve is proposed and the proposed method is tested for different test cases. The results are compared with Monte Carlo (MC) and interval arithmetic (IA) methods. The AA based result gives a more conservative bound than MC and IA method in all the cases.",
"title": ""
},
{
"docid": "6f84dbe3cf41906b66a7b1d9fe8b0ff1",
"text": "We show that the credit quality of corporate debt issuers deteriorates during credit booms, and that this deterioration forecasts low excess returns to corporate bondholders. The key insight is that changes in the pricing of credit risk disproportionately affect the financing costs faced by low quality firms, so the debt issuance of low quality firms is particularly useful for forecasting bond returns. We show that a significant decline in issuer quality is a more reliable signal of credit market overheating than rapid aggregate credit growth. We use these findings to investigate the forces driving time-variation in expected corporate bond returns. For helpful suggestions, we are grateful to Malcolm Baker, Effi Benmelech, Dan Bergstresser, John Campbell, Sergey Chernenko, Lauren Cohen, Ian Dew-Becker, Martin Fridson, Victoria Ivashina, Chris Malloy, Andrew Metrick, Jun Pan, Erik Stafford, Luis Viceira, Jeff Wurgler, seminar participants at the 2012 AEA Annual Meetings, Columbia GSB, Dartmouth Tuck, Federal Reserve Bank of New York, Federal Reserve Board of Governors, Harvard Business School, MIT Sloan, NYU Stern, Ohio State Fisher, University of Chicago Booth, University of Pennsylvania Wharton, Washington University Olin, Yale SOM, and especially David Scharfstein, Andrei Shleifer, Jeremy Stein, and Adi Sunderam. We thank Annette Larson and Morningstar for data on bond returns and Mara Eyllon and William Lacy for research assistance. The Division of Research at the Harvard Business School provided funding.",
"title": ""
},
{
"docid": "a0eb1b462d2169f5e7fa67690169591f",
"text": "In this paper, we present 3 different neural network-based methods to perform variable selection. OCD Optimal Cell Damage is a pruning method, which evaluates the usefulness of a variable and prunes the least useful ones (it is related to the Optimal Brain Damage method of J_.e Cun et al.). Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log prior to the weights distribution. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach allows to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to significantly improve on that.",
"title": ""
},
{
"docid": "231d8ef95d02889d70000d70d8743004",
"text": "Last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained. The main emphasis is made on the design of rules for computing sentiments.",
"title": ""
},
{
"docid": "1c56fb7d4c5998c6bfab1cb35fe21681",
"text": "With the growth of digital music, the development of music recommendation is helpful for users. The existing recommendation approaches are based on the users' preference on music. However, sometimes, recommending music according to the emotion is needed. In this paper, we propose a novel model for emotion-based music recommendation, which is based on the association discovery from film music. We investigated the music feature extraction and modified the affinity graph for association discovery between emotions and music features. Experimental result shows that the proposed approach achieves 85% accuracy in average.",
"title": ""
},
{
"docid": "bbf0611a48528c8dedfc42921832e575",
"text": "Recent low-voltage design techniques have enabled dramatic improvements in miniaturization and lifetime of wireless sensor nodes [1-3]. These systems typically use a secondary battery to provide energy when the sensor is awake and operating; the battery is then recharged from a harvesting source when the sensor is asleep. In these systems, the key requirement is to minimize energy per operation of the sensor. This extends the number of operations on one battery charge and/or reduces the time to recharge the battery between awake cycles. This requirement has driven significant advances in energy efficiency [1-2] and standby power consumption [3].",
"title": ""
},
{
"docid": "4e9d831270634e2b666450d866d4f57a",
"text": "We propose a novel class-based micro-classifier ensemble classification technique (MCE) for classifying data streams. Traditional ensemble-based data stream classification techniques build a classification model from each data chunk and keep an ensemble of such models. Due to the fixed length of the ensemble, when a new model is trained, one existing model is discarded. This creates several problems. First, if a class disappears from the stream and reappears after a long time, it would be misclassified if a majority of the classifiers in the ensemble does not contain any model of that class. Second, discarding a model means discarding the corresponding data chunk completely. However, knowledge obtained from some classes might be still useful and if they are discarded, the overall error rate would increase. To address these problems, we propose an ensemble model where each class information is stored separately. From each data chunk, we train a model for each class of data. We call each such model a micro-classifier. This approach is more robust than existing chunk-based ensembles in handling dynamic changes in the data stream. To the best of our knowledge, this is the first attempt to classify data streams using the class-based ensembles approach. When the number of classes grow in the stream, class-based ensembles may degrade in performance (speed). Hence, we sketch a cloud-based solution of our class-based ensembles to handle a large number of classes effectively. We compare our technique with several state-of-the-art data stream classification techniques on both synthetic and benchmark data streams, and obtain much higher accuracy.",
"title": ""
},
{
"docid": "8e13f75cd72aff7f7916452ff980c14f",
"text": "The software running on electronic devices is regularly updated, these days. A vehicle consists of many such devices, but is operated in a completely different manner than consumer devices. Update operations are safety critical in the automotive domain. Thus, they demand for a very well secured process. We propose an on-board security architecture which facilitates such update processes by combining hardware and software modules. In this paper, we present a protocol to show how this security architecture is employed in order to achieve secure firmware updates for automotive control units.",
"title": ""
},
{
"docid": "6162ad3612b885add014bd09baa5f07a",
"text": "The Neural Bag-of-Words (NBOW) model performs classification with an average of the input word vectors and achieves an impressive performance. While the NBOW model learns word vectors targeted for the classification task it does not explicitly model which words are important for given task. In this paper we propose an improved NBOW model with this ability to learn task specific word importance weights. The word importance weights are learned by introducing a new weighted sum composition of the word vectors. With experiments on standard topic and sentiment classification tasks, we show that (a) our proposed model learns meaningful word importance for a given task (b) our model gives best accuracies among the BOW approaches. We also show that the learned word importance weights are comparable to tf-idf based word weights when used as features in a BOW SVM classifier.",
"title": ""
},
{
"docid": "1e2e099c849b165b31b0c36040825464",
"text": "In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)’s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST’s initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility.",
"title": ""
}
] | scidocsrr |
456ce2d909fc268a151fa6967cbfaa11 | Hierarchical Spatio-Temporal Pattern Discovery and Predictive Modeling | [
{
"docid": "55032007199b5126480d432b1c45db4a",
"text": "Concern about national security has increased after the 26/11 Mumbai attack. In this paper we look at the use of missing value and clustering algorithm for a data mining approach to help predict the crimes patterns and fast up the process of solving crime. We will concentrate on MV algorithm and Apriori algorithm with some enhancements to aid in the process of filling the missing value and identification of crime patterns. We applied these techniques to real crime data. We also use semisupervised learning technique in this paper for knowledge discovery from the crime records and to help increase the predictive accuracy. General Terms Crime data mining, MV Algorithm, Apriori Algorithm",
"title": ""
},
{
"docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5",
"text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.",
"title": ""
}
] | [
{
"docid": "c64dd1051c5b6892df08813e38285843",
"text": "Diabetes has emerged as a major healthcare problem in India. Today Approximately 8.3 % of global adult population is suffering from Diabetes. India is one of the most diabetic populated country in the world. Today the technologies available in the market are invasive methods. Since invasive methods cause pain, time consuming, expensive and there is a potential risk of infectious diseases like Hepatitis & HIV spreading and continuous monitoring is therefore not possible. Now a days there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes. Thus biomedical equipment’s have a greater role in solving medical problems and enhance quality of life. Hence there is a great demand to have a reliable, instantaneous, cost effective and comfortable measurement system for the detection of blood glucose concentration. Non-invasive blood glucose measurement device is one such which can be used for continuous monitoring of glucose levels in human body.",
"title": ""
},
{
"docid": "023514bca28bf91e74ebcf8e473b4573",
"text": "As a result of technological advances on robotic systems, electronic sensors, and communication techniques, the production of unmanned aerial vehicle (UAV) systems has become possible. Their easy installation and flexibility led these UAV systems to be used widely in both the military and civilian applications. Note that the capability of one UAV is however limited. Nowadays, a multi-UAV system is of special interest due to the ability of its associate UAV members either to coordinate simultaneous coverage of large areas or to cooperate to achieve common goals / targets. This kind of cooperation / coordination requires reliable communication network with a proper network model to ensure the exchange of both control and data packets among UAVs. Such network models should provide all-time connectivity to avoid the dangerous failures or unintended consequences. Thus, the multi-UAV system relies on communication to operate. In this paper, current literature about multi-UAV system regarding its concepts and challenges is presented. Also, both the merits and drawbacks of the available networking architectures and models in a multi-UAV system are presented. Flying Ad Hoc Network (FANET) is moreover considered as a sophisticated type of wireless ad hoc network among UAVs, which solved the communication problems into other network models. Along with the FANET unique features, challenges and open issues are also discussed.",
"title": ""
},
{
"docid": "3284431912c05706fe61dfc56e2a38a5",
"text": "In recent years social media have become indispensable tools for information dissemination, operating in tandem with traditional media outlets such as newspapers, and it has become critical to understand the interaction between the new and old sources of news. Although social media as well as traditional media have attracted attention from several research communities, most of the prior work has been limited to a single medium. In addition temporal analysis of these sources can provide an understanding of how information spreads and evolves. Modeling temporal dynamics while considering multiple sources is a challenging research problem. In this paper we address the problem of modeling text streams from two news sources - Twitter and Yahoo! News. Our analysis addresses both their individual properties (including temporal dynamics) and their inter-relationships. This work extends standard topic models by allowing each text stream to have both local topics and shared topics. For temporal modeling we associate each topic with a time-dependent function that characterizes its popularity over time. By integrating the two models, we effectively model the temporal dynamics of multiple correlated text streams in a unified framework. We evaluate our model on a large-scale dataset, consisting of text streams from both Twitter and news feeds from Yahoo! News. Besides overcoming the limitations of existing models, we show that our work achieves better perplexity on unseen data and identifies more coherent topics. We also provide analysis of finding real-world events from the topics obtained by our model.",
"title": ""
},
{
"docid": "4cfe999fa7b2594327b6109084f0164f",
"text": "A large number of post-transcriptional modifications of transfer RNAs (tRNAs) have been described in prokaryotes and eukaryotes. They are known to influence their stability, turnover, and chemical/physical properties. A specific subset of tRNAs contains a thiolated uridine residue at the wobble position to improve the codon-anticodon interaction and translational accuracy. The proteins involved in tRNA thiolation are reminiscent of prokaryotic sulfur transfer reactions and of the ubiquitylation process in eukaryotes. In plants, some of the proteins involved in this process have been identified and show a high degree of homology to their non-plant equivalents. For other proteins, the identification of the plant homologs is much less clear, due to the low conservation in protein sequence. This manuscript describes the identification of CTU2, the second CYTOPLASMIC THIOURIDYLASE protein of Arabidopsis thaliana. CTU2 is essential for tRNA thiolation and interacts with ROL5, the previously identified CTU1 homolog of Arabidopsis. CTU2 is ubiquitously expressed, yet its activity seems to be particularly important in root tissue. A ctu2 knock-out mutant shows an alteration in root development. The analysis of CTU2 adds a new component to the so far characterized protein network involved in tRNA thiolation in Arabidopsis. CTU2 is essential for tRNA thiolation as a ctu2 mutant fails to perform this tRNA modification. The identified Arabidopsis CTU2 is the first CTU2-type protein from plants to be experimentally verified, which is important considering the limited conservation of these proteins between plant and non-plant species. Based on the Arabidopsis protein sequence, CTU2-type proteins of other plant species can now be readily identified.",
"title": ""
},
{
"docid": "ef96b4d9cac097af65fdfbb61d0fc847",
"text": "Altering image’s color is one of the most common tasks in image processing. However, most of existing methods are aimed to perform global color transfer. This usually means that the whole image is affected. But in many cases colors of only a part of an image needs changing, so it is important that the rest of the image remains unmodified. In this article we offer a fast and simple interactive algorithm based on local color statistics that allows altering color of only a part of an image, preserving image’s details and natural look.",
"title": ""
},
{
"docid": "28c0afcde94ba0fcf39678cba0b5999a",
"text": "To describe the aponeurotic expansion of the supraspinatus tendon with anatomic correlations and determine its prevalence in a series of patients imaged with MRI. In the first part of this HIPAA-compliant and IRB-approved study, we retrospectively reviewed 150 consecutive MRI studies of the shoulder obtained on a 1.5-T system. The aponeurotic expansion at the level of the bicipital groove was classified as: not visualized (type 0), flat-shaped (type 1), oval-shaped and less than 50 % the size of the adjacent long head of the biceps section (type 2A), or oval-shaped and more than 50 % the size of the adjacent long head of the biceps section (type 2B). In the second part of this study, we examined both shoulders of 25 cadavers with ultrasound. When aponeurotic expansion was seen at US, a dissection was performed to characterize its origin and termination. An aponeurotic expansion of the supraspinatus located anterior and lateral to the long head of the biceps in its groove was clearly demonstrated in 49 % of the shoulders with MRI. According to our classification, its shape was type 1 in 35 %, type 2A in 10 % and type 2B in 4 %. This structure was also identified in 28 of 50 cadaveric shoulders with ultrasound and confirmed at dissection in 10 cadavers (20 shoulders). This structure originated from the most anterior and superficial aspect of the supraspinatus tendon and inserted distally on the pectoralis major tendon. The aponeurotic expansion of the supraspinatus tendon can be identified with MRI or ultrasound in about half of the shoulders. It courses anteriorly and laterally to the long head of the biceps tendon, outside its synovial sheath.",
"title": ""
},
{
"docid": "2ffc4bb9de1fe6759b6c1d441c4d8854",
"text": "One of the long-standing tasks in computer vision is to use a single 2-D view of an object in order to produce its 3-D shape. Recovering the lost dimension in this process has been the goal of classic shape-from-X methods, but often the assumptions made in those works are quite limiting to be useful for general 3-D objects. This problem has been recently addressed with deep learning methods containing a 2-D (convolution) encoder followed by a 3-D (deconvolution) decoder. These methods have been reasonably successful, but memory and run time constraints impose a strong limitation in terms of the resolution of the reconstructed 3-D shapes. In particular, state-of-the-art methods are able to reconstruct 3-D shapes represented by volumes of at most 323 voxels using state-of-the-art desktop computers. In this work, we present a scalable 2-D single view to 3-D volume reconstruction deep learning method, where the 3-D (deconvolution) decoder is replaced by a simple inverse discrete cosine transform (IDCT) decoder. Our simpler architecture has an order of magnitude faster inference when reconstructing 3-D volumes compared to the convolution-deconvolutional model, an exponentially smaller memory complexity while training and testing, and a sub-linear run-time training complexity with respect to the output volume size. We show on benchmark datasets that our method can produce high-resolution reconstructions with state of the art accuracy.",
"title": ""
},
{
"docid": "dbc11b8d76eb527444ead3b2168aa2c2",
"text": "In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. To this end, we introduce a new model for statistical relational learning that is built upon deep recursive neural networks, and give experimental evidence that it can easily compete with, or even outperform, existing logic-based reasoners on the task of ontology reasoning. More precisely, we compared our implemented system with one of the best logic-based ontology reasoners at present, RDFox, on a number of large standard benchmark datasets, and found that our system attained high reasoning quality, while being up to two orders of magnitude faster.",
"title": ""
},
{
"docid": "6ef6cbb60da56bfd53ae945480908d3c",
"text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.",
"title": ""
},
{
"docid": "ddae0422527c45e37f9a5b204cb0580f",
"text": "Several studies have reported high efficacy and safety of artemisinin-based combination therapy (ACT) mostly under strict supervision of drug intake and limited to children less than 5 years of age. Patients over 5 years of age are usually not involved in such studies. Thus, the findings do not fully reflect the reality in the field. This study aimed to assess the effectiveness and safety of ACT in routine treatment of uncomplicated malaria among patients of all age groups in Nanoro, Burkina Faso. A randomized open label trial comparing artesunate–amodiaquine (ASAQ) and artemether–lumefantrine (AL) was carried out from September 2010 to October 2012 at two primary health centres (Nanoro and Nazoanga) of Nanoro health district. A total of 680 patients were randomized to receive either ASAQ or AL without any distinction by age. Drug intake was not supervised as pertains in routine practice in the field. Patients or their parents/guardians were advised on the time and mode of administration for the 3 days treatment unobserved at home. Follow-up visits were performed on days 3, 7, 14, 21, and 28 to evaluate clinical and parasitological resolution of their malaria episode as well as adverse events. PCR genotyping of merozoite surface proteins 1 and 2 (msp-1, msp-2) was used to differentiate recrudescence and new infection. By day 28, the PCR corrected adequate clinical and parasitological response was 84.1 and 77.8 % respectively for ASAQ and AL. The cure rate was higher in older patients than in children under 5 years old. The risk of re-infection by day 28 was higher in AL treated patients compared with those receiving ASAQ (p < 0.00001). Both AL and ASAQ treatments were well tolerated. This study shows a lowering of the efficacy when drug intake is not directly supervised. This is worrying as both rates are lower than the critical threshold of 90 % required by the WHO to recommend the use of an anti-malarial drug in a treatment policy. Trial registration: NCT01232530",
"title": ""
},
{
"docid": "82e282703eeed354d2e5dc39992b779c",
"text": "Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine-tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster's study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross-validation and prediction of catastrophic risk based on historic data. Our results suggest that BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.",
"title": ""
},
{
"docid": "6ce860678cbee5db5940cd1bb161525e",
"text": "We propose a novel method for the multi-view reconstruction problem. Surfaces which do not have direct support in the input 3D point cloud and hence need not be photo-consistent but represent real parts of the scene (e.g. low-textured walls, windows, cars) are important for achieving complete reconstructions. We augmented the existing Labatut CGF 2009 method with the ability to cope with these difficult surfaces just by changing the t-edge weights in the construction of surfaces by a minimal s-t cut. Our method uses Visual-Hull to reconstruct the difficult surfaces which are not sampled densely enough by the input 3D point cloud. We demonstrate importance of these surfaces on several real-world data sets. We compare our improvement to our implementation of the Labatut CGF 2009 method and show that our method can considerably better reconstruct difficult surfaces while preserving thin structures and details in the same quality and computational time.",
"title": ""
},
{
"docid": "db98068f4c69b2389c9ff1bc0ade4e6f",
"text": "We infiltrate the ASIC development chain by inserting a small denial-of-service (DoS) hardware Trojan at the fabrication design phase into an existing VLSI circuit, thereby simulating an adversary at a semiconductor foundry. Both the genuine and the altered ASICs have been fabricated using a 180 nm CMOS process. The Trojan circuit adds an overhead of only 0.5% to the original design. In order to detect the hardware Trojan, we perform side-channel analyses and apply IC-fingerprinting techniques using templates, principal component analysis (PCA), and support vector machines (SVMs). As a result, we were able to successfully identify and classify all infected ASICs from non-infected ones. To the best of our knowledge, this is the first hardware Trojan manufactured as an ASIC and has successfully been analyzed using side channels.",
"title": ""
},
{
"docid": "a44ad77cec2b25cb1c42cb0e9e491e39",
"text": "We present a new and novel continuum robot, built from contracting pneumatic muscles. The robot has a continuous compliant backbone, achieved via three independently controlled serially connected three degree of freedom sections, for a total of nine degrees of freedom. We detail the design, construction, and initial testing of the robot. The use of contracting muscles, in contrast to previous comparable designs featuring expanding muscles, is well-suited to use of the robot as an active hook in dynamic manipulation tasks. We describe experiments using the robot in this novel manipulation mode.",
"title": ""
},
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "cacef3b17bafadd25cf9a49e826ee066",
"text": "Road accidents are frequent and many cause casualties. Fast handling can minimize the number of deaths from traffic accidents. In addition to victims of traffic accidents, there are also patients who need emergency handling of the disease he suffered. One of the first help that can be given to the victim or patient is to use an ambulance equipped with medical personnel and equipment needed. The availability of ambulance and accurate information about victims and road conditions can help the first aid process for victims or patients. Supportive treatment can be done to deal with patients by determining the best route (nearest and fastest) to the nearest hospital. The best route can be known by utilizing the collaboration between the Dijkstra algorithm and the Floyd-warshall algorithm. This application applies Dijkstra's algorithm to determine the fastest travel time to the nearest hospital. The Floyd-warshall algorithm is implemented to determine the closest distance to the hospital. Data on some nearby hospitals will be collected by the system using Dijkstra's algorithm and then the system will calculate the fastest distance based on the last traffic condition using the Floyd-warshall algorithm to determine the best route to the nearest hospital recommended by the system. This application is built with the aim of providing support for the first handling process to the victim or the emergency patient by giving the ambulance calling report and determining the best route to the nearest hospital.",
"title": ""
},
{
"docid": "0dc9f8f65efd02f16fea77d910fd73c7",
"text": "The visual system is the most studied sensory pathway, which is partly because visual stimuli have rather intuitive properties. There are reasons to think that the underlying principle ruling coding, however, is the same for vision and any other type of sensory signal, namely the code has to satisfy some notion of optimality--understood as minimum redundancy or as maximum transmitted information. Given the huge variability of natural stimuli, it would seem that attaining an optimal code is almost impossible; however, regularities and symmetries in the stimuli can be used to simplify the task: symmetries allow predicting one part of a stimulus from another, that is, they imply a structured type of redundancy. Optimal coding can only be achieved once the intrinsic symmetries of natural scenes are understood and used to the best performance of the neural encoder. In this paper, we review the concepts of optimal coding and discuss the known redundancies and symmetries that visual scenes have. We discuss in depth the only approach which implements the three of them known so far: translational invariance, scale invariance and multiscaling. Not surprisingly, the resulting code possesses features observed in real visual systems in mammals.",
"title": ""
},
{
"docid": "b7d181503afa8bcb36b2428fcf3655bc",
"text": "Since the IEEE 1609/WAVE standards were published, much research has continued on validation and optimization. However, precise simulation models of these standards are lacking recently, especially within the ns-3 network simulator. In this paper, we present the ns-3 implementation details of the IEEE 1609.4 and IEEE 802.11p standards which are key elements of the WAVE MAC layer. Moreover we discuss some implementation issues and describe our solutions. Lastly, we also analyze and evaluate the performance of the WAVE MAC layer with the implemented model. Our simulation results show that multiple channel operation specified in the WAVE standards could impact vehicular wireless communication differently, depending on the different scenarios, and the results should be considered carefully during the development of VANET applications.",
"title": ""
},
{
"docid": "bd6115cbcf62434f38ca4b43480b7c5a",
"text": "Most existing person re-identification methods focus on finding similarities between persons between pairs of cameras (camera pairwise re-identification) without explicitly maintaining consistency of the results across the network. This may lead to infeasible associations when results from different camera pairs are combined. In this paper, we propose a network consistent re-identification (NCR) framework, which is formulated as an optimization problem that not only maintains consistency in re-identification results across the network, but also improves the camera pairwise re-identification performance between all the individual camera pairs. This can be solved as a binary integer programing problem, leading to a globally optimal solution. We also extend the proposed approach to the more general case where all persons may not be present in every camera. Using two benchmark datasets, we validate our approach and compare against state-of-the-art methods.",
"title": ""
},
{
"docid": "e56e6fd8620ab8c76abc73c379d1fdd5",
"text": "Article history: Received 7 August 2015 Received in revised form 26 January 2016 Accepted 1 April 2016 Available online 7 April 2016 The emergence of social commerce has brought substantial changes to both businesses and consumers. Hence, understanding consumer behavior in the context of social commerce has become critical for companies that aim to better influence consumers and harness the power of their social ties. Given that research on this issue is new and largely fragmented, it will be theoretically important to evaluate what has been studied and derive meaningful insights through a structured review of the literature. In this study, we conduct a systematic review of social commerce studies to explicate how consumers behave on social networking sites. We classify these studies, discuss noteworthy theories, and identify important research methods. More importantly, we draw upon the stimulus–organism–response model and the five-stage consumer decision-making process to propose an integrative framework for understanding consumer behavior in this context. We believe that this framework can provide a useful basis for future social commerce research. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
fe5597a76544a776519a5fbf9efe7ebf | Automatic identification of cited text spans: a multi-classifier approach over imbalanced dataset | [
{
"docid": "d38e5fa4adadc3e979c5de812599c78a",
"text": "The convergence properties of a nearest neighbor rule that uses an editing procedure to reduce the number of preclassified samples and to improve the performance of the rule are developed. Editing of the preclassified samples using the three-nearest neighbor rule followed by classification using the single-nearest neighbor rule with the remaining preclassified samples appears to produce a decision procedure whose risk approaches the Bayes' risk quite closely in many problems with only a few preclassified samples. The asymptotic risk of the nearest neighbor rules and the nearest neighbor rules using edited preclassified samples is calculated for several problems.",
"title": ""
},
{
"docid": "01055f9b1195cd7d03b404f3d530bb55",
"text": "In recent years there has been an increasing interest in approaches to scientific summarization that take advantage of the citations a research paper has received in order to extract its main contributions. In this context, the CL-SciSumm 2017 Shared Task has been proposed to address citation-based information extraction and summarization. In this paper we present several systems to address three of the CL-SciSumm tasks. Notably, unsupervised systems to match citing and cited sentences (Task 1A), a supervised approach to identify the type of information being cited (Task 1B), and a supervised citation-based summarizer (Task 2).",
"title": ""
},
{
"docid": "a13a50d552572d08b4d1496ca87ac160",
"text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.",
"title": ""
}
] | [
{
"docid": "ffc239273a5e911dcc59559ef7c2c7f8",
"text": "Human-dominated marine ecosystems are experiencing accelerating loss of populations and species, with largely unknown consequences. We analyzed local experiments, long-term regional time series, and global fisheries data to test how biodiversity loss affects marine ecosystem services across temporal and spatial scales. Overall, rates of resource collapse increased and recovery potential, stability, and water quality decreased exponentially with declining diversity. Restoration of biodiversity, in contrast, increased productivity fourfold and decreased variability by 21%, on average. We conclude that marine biodiversity loss is increasingly impairing the ocean's capacity to provide food, maintain water quality, and recover from perturbations. Yet available data suggest that at this point, these trends are still reversible.",
"title": ""
},
{
"docid": "8cc9ab356aa8b0f88d244b2077816ddc",
"text": "Brain control of prehension is thought to rely on two specific brain circuits: a dorsomedial one (involving the areas of the superior parietal lobule and the dorsal premotor cortex) involved in the transport of the hand toward the object and a dorsolateral one (involving the inferior parietal lobule and the ventral premotor cortex) dealing with the preshaping of the hand according to the features of the object. The present study aimed at testing whether a pivotal component of the dorsomedial pathway (area V6A) is involved also in hand preshaping and grip formation to grasp objects of different shapes. Two macaque monkeys were trained to reach and grasp different objects. For each object, animals used a different grip: whole-hand prehension, finger prehension, hook grip, primitive precision grip, and advanced precision grip. Almost half of 235 neurons recorded from V6A displayed selectivity for a grip or a group of grips. Several experimental controls were used to ensure that neural modulation was attributable to grip only. These findings, in concert with previous studies demonstrating that V6A neurons are modulated by reach direction and wrist orientation, that lesion of V6A evokes reaching and grasping deficits, and that dorsal premotor cortex contains both reaching and grasping neurons, indicate that the dorsomedial parieto-frontal circuit may play a central role in all phases of reach-to-grasp action. Our data suggest new directions for the modeling of prehension movements and testable predictions for new brain imaging and neuropsychological experiments.",
"title": ""
},
{
"docid": "2be085910cbfd243ba85eba0a6521779",
"text": "BACKGROUND\nSuspension sutures are commonly used in numerous cosmetic surgical procedures. Several authors have described the use of such sutures as a part of classical rhinoplasty. On the other hand, it is not uncommon to see patients seeking nasal surgery for only a minimal hump deformity combined with an underrotated, underprojecting tip, which does not necessarily require all components of rhinoplasty. With the benefit of the suture suspension technique described here, such simple tip deformities can be reshaped percutaneously via minimal incisions.\n\n\nOBJECTIVE\nIn this study, the author describes an original technique based on the philosophy of vertical suspension lifts, achieving the suspension of the nasal tip with a percutaneous purse-string suture applied through small access punctures.\n\n\nPATIENTS AND METHODS\nBetween December 2005 and December 2008, 86 patients were selected to undergo rhinoplasty using the author's shuttle lifting technique. The procedure was performed with a double-sided needle or shuttle, smoothly anchoring the lower lateral cartilages in a vertical direction to the glabellar periosteum, excluding the skin envelope.\n\n\nRESULTS\nMean follow-up was 13 months, with a range of eight to 24 months. Outcomes were satisfactory in all but 12 cases, of which seven found the result inadequate; two of those patients underwent a definitive rhinoplasty operation. Five patients requested that the suture be detached because of an overexaggerated appearance. Operative time was less than 15 minutes in all patients, with an uneventful rapid recovery.\n\n\nCONCLUSIONS\nAs a minimally invasive nasal reshaping procedure, shuttle lifting is a good choice to achieve long-lasting, satisfactory results in selected patients with minimal hump deformity and an underrotated tip. The significance of this technique lies in the fact that it is one of very few office-based minimally invasive alternatives for aesthetic nasal surgery, with a recovery period of two to three days.",
"title": ""
},
{
"docid": "8f601e751650b56be81b069c42089640",
"text": "Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent codebased schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation.",
"title": ""
},
{
"docid": "41c317b0e275592ea9009f3035d11a64",
"text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.",
"title": ""
},
{
"docid": "9698bfe078a32244169cbe50a04ebb00",
"text": "Maximum power point tracking (MPPT) controllers play an important role in photovoltaic systems. They maximize the output power of a PV array for a given set of conditions. This paper presents an overview of the different MPPT techniques. Each technique is evaluated on its ability to detect multiple maxima, convergence speed, ease of implementation, efficiency over a wide output power range, and cost of implementation. The perturbation and observation (P & O), and incremental conductance (IC) algorithms are widely used techniques, with many variants and optimization techniques reported. For this reason, this paper evaluates the performance of these two common approaches from a dynamic and steady state perspective.",
"title": ""
},
{
"docid": "8e9deb174bedff0a5b03e4286172cd36",
"text": "An ethnographic approach to the study of caregiver-assisted music events was employed with patients suffering from dementia or suspected dementia. The aim of this study was to illuminate the importance of music events and the reactions and social interactions of patients with dementia or suspected dementia and their caregivers before, during and after such events, including the remainder of the day. The results showed that the patients experienced an ability to sing, play instruments, perform body movements, and make puns during such music events. While singing familiar songs, some patients experienced the return of distant memories, which they seemed to find very pleasurable. During and after the music events, the personnel experienced bonding with the patients, who seemed easier to care for. Caregiver-assisted music events show a great potential for use in dementia care.",
"title": ""
},
{
"docid": "4e263764fd14f643f7b414bc12615565",
"text": "We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.",
"title": ""
},
{
"docid": "c2fee2767395b1e9d6490956c7a23268",
"text": "In this paper, we elaborate the advantages of combining two neural network methodologies, convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks, with the framework of hybrid hidden Markov models (HMM) for recognizing offline handwriting text. CNNs employ shift-invariant filters to generate discriminative features within neural networks. We show that CNNs are powerful tools to extract general purpose features that even work well for unknown classes. We evaluate our system on a Chinese handwritten text database and provide a GPU-based implementation that can be used to reproduce the experiments. All experiments were conducted with RWTH OCR, an open-source system developed at our institute.",
"title": ""
},
{
"docid": "457f2508c59daaae9af818f8a6a963d1",
"text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.",
"title": ""
},
{
"docid": "fbd05f764470b94af30c7799e94ff0f0",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
},
{
"docid": "5eb9e759ec8fc9ad63024130f753d136",
"text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications",
"title": ""
},
{
"docid": "51b766b0a7f1e3bc1f49d16df04a69f7",
"text": "This study reports the results of a biometrical genetical analysis of scores on a personality inventory (The Eysenck Personality Questionnaire, or EPQ), which purports to measure psychoticism, neuroticism, extraversion and dissimulation (Lie Scale). The subjects were 544 pairs of twins, from the Maudsley Twin Register. The purpose of the study was to test the applicability of various genotypeenvironmental models concerning the causation of P scores. Transformation of the raw scores is required to secure a scale on which the effects of genes and environment are additive. On such a scale 51% of the variation in P is due to environmental differences within families, but the greater part (77%) of this environmental variation is due to random effects which are unlikely to be controllable. . The genetical consequences ot'assortative mating were too slight to be detectable in this study, and the genetical variation is consistent with the hypothesis that gene effects are additive. This is a general finding for traits which have been subjected to stabilizing selection. Our model for P is consistent with these advanced elsewhere to explain the origin of certain kinds of psychopathology. The data provide little support for the view that the \"family environment\" (including the environmental influence of parents) plays a major part in the determination of individual differences in P, though we cite evidence suggesting that sibling competition effects are producing genotypeenvironmental covariation for the determinants of P in males. The genetical and environmental determinants of the covariation of P with other personality dimensions are considered. Assumptions are discussed and tested where possible.",
"title": ""
},
{
"docid": "0eed7e3a9128b10f8c4711592b9628ee",
"text": "Visual defects, called mura in the field, sometimes occur during the manufacturing of the flat panel liquid crystal displays. In this paper we propose an automatic inspection method that reliably detects and quantifies TFT-LCD regionmura defects. The method consists of two phases. In the first phase we segment candidate region-muras from TFT-LCD panel images using the modified regression diagnostics and Niblack’s thresholding. In the second phase, based on the human eye’s sensitivity to mura, we quantify mura level for each candidate, which is used to identify real muras by grading them as pass or fail. Performance of the proposed method is evaluated on real TFT-LCD panel samples. key words: Machine vision, image segmentation, regression diagnostics, industrial inspection, visual perception.",
"title": ""
},
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "daecaa40531dad2622d83aca90ff7185",
"text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data could be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist’s choice is affected directly by the travel costs, which includes both financial and time costs. To that end, in this article, we provide a focused study of cost-aware tour recommendation. Along this line, we first propose two ways to represent user cost preference. One way is to represent user cost preference by a two-dimensional vector. Another way is to consider the uncertainty about the cost that a user can afford and introduce a Gaussian prior to model user cost preference. With these two ways of representing user cost preference, we develop different cost-aware latent factor models by incorporating the cost information into the probabilistic matrix factorization (PMF) model, the logistic probabilistic matrix factorization (LPMF) model, and the maximum margin matrix factorization (MMMF) model, respectively. When applied to real-world travel tour data, all the cost-aware recommendation models consistently outperform existing latent factor models with a significant margin.",
"title": ""
},
{
"docid": "a9fa30e95bf31ea2061a66f5b4aaf210",
"text": "In the context of current concerns about replication in psychological science, we describe 10 findings from behavioral genetic research that have replicated robustly. These are \"big\" findings, both in terms of effect size and potential impact on psychological science, such as linearly increasing heritability of intelligence from infancy (20%) through adulthood (60%). Four of our top 10 findings involve the environment, discoveries that could have been found only with genetically sensitive research designs. We also consider reasons specific to behavioral genetics that might explain why these findings replicate.",
"title": ""
},
{
"docid": "4dc6f5768b43e6c491f0b08600acbea5",
"text": "Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.",
"title": ""
}
] | scidocsrr |
b0975ac88cbc489dac8ff98ae7401dfe | Active learning for regression using greedy sampling | [
{
"docid": "ef444570c043be67453317e26600972f",
"text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.",
"title": ""
}
] | [
{
"docid": "b763ab2702a32f82b75af938cb352317",
"text": "The idea that memory is stored in the brain as physical alterations goes back at least as far as Plato, but further conceptualization of this idea had to wait until the 20(th) century when two guiding theories were presented: the \"engram theory\" of Richard Semon and Donald Hebb's \"synaptic plasticity theory.\" While a large number of studies have been conducted since, each supporting some aspect of each of these theories, until recently integrative evidence for the existence of engram cells and circuits as defined by the theories was lacking. In the past few years, the combination of transgenics, optogenetics, and other technologies has allowed neuroscientists to begin identifying memory engram cells by detecting specific populations of cells activated during specific learning epochs and by engineering them not only to evoke recall of the original memory, but also to alter the content of the memory.",
"title": ""
},
{
"docid": "e3218926a5a32d2c44d5aea3171085e2",
"text": "The present study sought to determine the effects of Mindful Sport Performance Enhancement (MSPE) on runners. Participants were 25 recreational long-distance runners openly assigned to either the 4-week intervention or to a waiting-list control group, which later received the same program. Results indicate that the MSPE group showed significantly more improvement in organizational demands (an aspect of perfectionism) compared with controls. Analyses of preto postworkshop change found a significant increase in state mindfulness and trait awareness and decreases in sport-related worries, personal standards perfectionism, and parental criticism. No improvements in actual running performance were found. Regression analyses revealed that higher ratings of expectations and credibility of the workshop were associated with lower postworkshop perfectionism, more years running predicted higher ratings of perfectionism, and more life stressors predicted lower levels of worry. Findings suggest that MSPE may be a useful mental training intervention for improving mindfulness, sport-anxiety related worry, and aspects of perfectionism in long-distance runners.",
"title": ""
},
{
"docid": "d67dec88b60988b385befb5653abef2b",
"text": "With the growing importance of networked embedded devices in the upcoming Internet of Things, new attacks targeting embedded OSes are emerging. ARM processors, which power over 60% of embedded devices, introduce a hardware security extension called TrustZone to protect secure applications in an isolated secure world that cannot be manipulated by a compromised OS in the normal world. Leveraging TrustZone technology, a number of memory integrity checking schemes have been proposed in the secure world to introspect malicious memory modification of the normal world. In this paper, we first discover and verify an ARM TrustZone cache incoherence behavior, which results in the cache contents of the two worlds, secure and non-secure, potentially being different even when they are mapped to the same physical address. Furthermore, code in one TrustZone world cannot access the cache content in the other world. Based on this observation, we develop a new rootkit called CacheKit that hides in the cache of the normal world and is able to evade memory introspection from the secure world. We implement a CacheKit prototype on Cortex-A8 processors after solving a number of challenges. First, we employ the Cache-as-RAM technique to ensure that the malicious code is only loaded into the CPU cache and not RAM. Thus, the secure world cannot detect the existence of the malicious code by examining the RAM. Second, we use the ARM processor's hardware support on cache settings to keep the malicious code persistent in the cache. Third, to evade introspection that flushes cache content back into RAM, we utilize physical addresses from the I/O address range that is not backed by any real I/O devices or RAM. The experimental results show that CacheKit can successfully evade memory introspection from the secure world and has small performance impacts on the rich OS. We discuss potential countermeasures to detect this type of rootkit attack.",
"title": ""
},
{
"docid": "3ff13bb873dd9a8deada0a7837c5eca4",
"text": "This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. Especially, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we transfer efficiently a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.",
"title": ""
},
{
"docid": "0277fd19009088f84ce9f94a7e942bc1",
"text": "These study it is necessary to can be used as a theoretical foundation upon which to base decision-making and strategic thinking about e-learning system. This paper proposes a new framework for assessing readiness of an organization to implement the e-learning system project on the basis of McKinsey 7S model using fuzzy logic analysis. The study considers 7 dimensions as approach to assessing the current situation of the organization prior to system implementation to identify weakness areas which may encounter the project with failure. Adopted was focus on Questionnaires and group interviews to specific data collection from three colleges in Mosul University in Iraq. This can be achieved success in building an e-learning system at the University of Mosul by readiness assessment according to the model of multidimensional based on the framework of 7S is selected by 23 factors, and thus can avoid failures or weaknesses facing the implementation process before the start of the project and a step towards enabling the administration to make decisions that achieve success in this area, as well as to avoid the high cost associated with the implementation process.",
"title": ""
},
{
"docid": "16fa2f02d0709c130cc35fce61793ae1",
"text": "Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements. The choice of a distance or similarity metric is, however, not trivial and can be highly dependent on the application at hand. In this work, we propose a novel metric learning method to evaluate distance between graphs that leverages the power of convolutional neural networks, while exploiting concepts from spectral graph theory to allow these operations on irregular graphs. We demonstrate the potential of our method in the field of connectomics, where neuronal pathways or functional connections between brain regions are commonly modelled as graphs. In this problem, the definition of an appropriate graph similarity function is critical to unveil patterns of disruptions associated with certain brain disorders. Experimental results on the ABIDE dataset show that our method can learn a graph similarity metric tailored for a clinical application, improving the performance of a simple k-nn classifier by 11.9% compared to a traditional distance metric.",
"title": ""
},
{
"docid": "d6a6cadd782762e4591447b7dd2c870a",
"text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.",
"title": ""
},
{
"docid": "f3c6b42ed65b38708b12d46c48af4f0b",
"text": "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label and to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in samplespecific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computeraided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010); Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels",
"title": ""
},
{
"docid": "cc56bbfe498556acb317fd325d750cf9",
"text": "The goal of the current work is to evaluate semantic feature aggregation techniques in a task of gender classification of public social media texts in Russian. We collect Facebook posts of Russian-speaking users and apply them as a dataset for two topic modelling techniques and a distributional clustering approach. The output of the algorithms is applied as a feature aggregation method in a task of gender classification based on a smaller Facebook sample. The classification performance of the best model is favorably compared against the lemmas baseline and the state-of-the-art results reported for a different genre or language. The resulting successful features are exemplified, and the difference between the three techniques in terms of classification performance and feature contents are discussed, with the best technique clearly outperforming the others.",
"title": ""
},
{
"docid": "26b992f705ef29460c0b459d75a115a8",
"text": "Supply chain management creates value for companies, customers and stakeholders interacting throughout a supply chain. The strategic dimension of supply chains makes it paramount that their performances are measured. In today’s performance evaluation processes, companies tend to refer to several models that will differ in terms of corporate organization, the distribution of responsibilities and supply chain maturity. The present article analyzes various models used to assess supply chains by highlighting their specific characteristics and applicability in different contexts. It also offers an analytical grid breaking these models down into seven layers. This grid will help managers evolve towards a model that is more suitable for their needs.",
"title": ""
},
{
"docid": "56a72aaff0c955b79449035f2cccabbc",
"text": "This work aims to identify the main aspects of Web design responsible for eliciting specific emotions. For this purpose, we performed a user study with 40 participants testing a Web application designed by applying a set of criteria for stimulating various emotions. In particular, we considered six emotions (hate, anxiety, boredom, fun, serenity, love), and for each of them a specific set of design criteria was exploited. The purpose of the study was to reach a better understanding regarding what design techniques are most important to stimulate each emotion. We report on the results obtained and discuss their implications. Such results can inform the development of guidelines for Web applications able to stimulate users’ emotions.",
"title": ""
},
{
"docid": "01034189c9a4aa11bdff074e7470b3f8",
"text": "We introducea methodfor predictinga controlsignalfrom anotherrelatedsignal,and applyit to voice puppetry: Generatingfull facialanimationfrom expressi ve information in anaudiotrack. Thevoicepuppetlearnsa facialcontrolmodelfrom computervision of realfacialbehavior, automaticallyincorporatingvocalandfacialdynamicssuchascoarticulation. Animation is producedby usingaudioto drive themodel,which induces a probability distribution over the manifold of possiblefacial motions. We presenta linear-time closed-formsolution for the most probabletrajectoryover this manifold. The outputis a seriesof facial control parameters, suitablefor driving many different kindsof animationrangingfrom video-realisticimagewarpsto 3D cartooncharacters. This work may not be copiedor reproducedin whole or in part for any commercialpurpose.Permissionto copy in whole or in part without paymentof fee is grantedfor nonprofiteducationaland researchpurposesprovided that all suchwhole or partial copiesincludethe following: a noticethat suchcopying is by permissionof Mitsubishi Electric InformationTechnologyCenterAmerica;an acknowledgmentof the authorsandindividual contributionsto the work; andall applicableportionsof the copyright notice. Copying, reproduction,or republishingfor any otherpurposeshall requirea licensewith paymentof feeto MitsubishiElectricInformationTechnologyCenterAmerica.All rightsreserved. Copyright c MitsubishiElectricInformationTechnologyCenterAmerica,1999 201Broadway, Cambridge,Massachusetts 02139 Publication History:– 1. 9sep98first circulated. 2. 7jan99submittedto SIGGRAPH’99",
"title": ""
},
{
"docid": "72b080856124d39b62d531cb52337ce9",
"text": "Experimental and clinical studies have identified a crucial role of microcirculation impairment in severe infections. We hypothesized that mottling, a sign of microcirculation alterations, was correlated to survival during septic shock. We conducted a prospective observational study in a tertiary teaching hospital. All consecutive patients with septic shock were included during a 7-month period. After initial resuscitation, we recorded hemodynamic parameters and analyzed their predictive value on mortality. The mottling score (from 0 to 5), based on mottling area extension from the knees to the periphery, was very reproducible, with an excellent agreement between independent observers [kappa = 0.87, 95% CI (0.72–0.97)]. Sixty patients were included. The SOFA score was 11.5 (8.5–14.5), SAPS II was 59 (45–71) and the 14-day mortality rate 45% [95% CI (33–58)]. Six hours after inclusion, oliguria [OR 10.8 95% CI (2.9, 52.8), p = 0.001], arterial lactate level [<1.5 OR 1; between 1.5 and 3 OR 3.8 (0.7–29.5); >3 OR 9.6 (2.1–70.6), p = 0.01] and mottling score [score 0–1 OR 1; score 2–3 OR 16, 95% CI (4–81); score 4–5 OR 74, 95% CI (11–1,568), p < 0.0001] were strongly associated with 14-day mortality, whereas the mean arterial pressure, central venous pressure and cardiac index were not. The higher the mottling score was, the earlier death occurred (p < 0.0001). Patients whose mottling score decreased during the resuscitation period had a better prognosis (14-day mortality 77 vs. 12%, p = 0.0005). The mottling score is reproducible and easy to evaluate at the bedside. The mottling score as well as its variation during resuscitation is a strong predictor of 14-day survival in patients with septic shock.",
"title": ""
},
{
"docid": "6cb2e41787378eca0dbbc892f46274e5",
"text": "Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only reivew-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.",
"title": ""
},
{
"docid": "c206399c6ebf96f3de3aa5fdb10db49d",
"text": "Canine monocytotropic ehrlichiosis (CME), caused by the rickettsia Ehrlichia canis, an important canine disease with a worldwide distribution. Diagnosis of the disease can be challenging due to its different phases and multiple clinical manifestations. CME should be suspected when a compatible history (living in or traveling to an endemic region, previous tick exposure), typical clinical signs and characteristic hematological and biochemical abnormalities are present. Traditional diagnostic techniques including hematology, cytology, serology and isolation are valuable diagnostic tools for CME, however a definitive diagnosis of E. canis infection requires molecular techniques. This article reviews the current literature covering the diagnosis of infection caused by E. canis.",
"title": ""
},
{
"docid": "4074b8cd9b869a7a57f2697b97139308",
"text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a similarity space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define various operations for our formalization, both for creating new concepts from old ones and for measuring relations between concepts. We present an illustrative toy-example and sketch a research project on concept formation that is based on both our formalization and its implementation.",
"title": ""
},
{
"docid": "559e5a5da1f0a924fc432e7f4c3548bd",
"text": "Deep learning is recently showing outstanding results for solving a wide variety of robotic tasks in the areas of perception, planning, localization, and control. Its excellent capabilities for learning representations from the complex data acquired in real environments make it extremely suitable for many kinds of autonomous robotic applications. In parallel, Unmanned Aerial Vehicles (UAVs) are currently being extensively applied for several types of civilian tasks in applications going from security, surveillance, and disaster rescue to parcel delivery or warehouse management. In this paper, a thorough review has been performed on recent reported uses and applications of deep learning forUAVs, including themost relevant developments as well as their performances and limitations. In addition, a detailed explanation of the main deep learning techniques is provided. We conclude with a description of the main challenges for the application of deep learning for UAV-based solutions.",
"title": ""
},
{
"docid": "be06f51778191cf3b4a97b25c367575e",
"text": "Wireless sensor networks are gaining more and more attention these days. They gave us the chance of collecting data from noisy environment. So it becomes possible to obtain precise and continuous monitoring of different phenomenons. However wireless Sensor Network (WSN) is affected by many anomalies that occur due to software or hardware problems. So various protocols are developed in order to detect and localize faults then distinguish the faulty node from the right one. In this paper we are concentrated on a specific type of faults in WSN which is the outlier. We are focus on the classification of data (outlier and normal) using three different methods of machine learning then we compare between them. These methods are validated using real data obtained from motes deployed in an actual living lab.",
"title": ""
},
{
"docid": "5898f4adaf86393972bcbf4c4ab91540",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] | scidocsrr |
c948dafacf3bd2626a6a86f858604ff2 | Monitoring endurance running performance using cardiac parasympathetic function | [
{
"docid": "4428705a7eab914db00a38a57fb9199e",
"text": "Physiological testing of elite athletes requires the correct identification and assessment of sports-specific underlying factors. It is now recognised that performance in long-distance events is determined by maximal oxygen uptake (V(2 max)), energy cost of exercise and the maximal fractional utilisation of V(2 max) in any realised performance or as a corollary a set percentage of V(2 max) that could be endured as long as possible. This later ability is defined as endurance, and more precisely aerobic endurance, since V(2 max) sets the upper limit of aerobic pathway. It should be distinguished from endurance ability or endurance performance, which are synonymous with performance in long-distance events. The present review examines methods available in the literature to assess aerobic endurance. They are numerous and can be classified into two categories, namely direct and indirect methods. Direct methods bring together all indices that allow either a complete or a partial representation of the power-duration relationship, while indirect methods revolve around the determination of the so-called anaerobic threshold (AT). With regard to direct methods, performance in a series of tests provides a more complete and presumably more valid description of the power-duration relationship than performance in a single test, even if both approaches are well correlated with each other. However, the question remains open to determine which systems model should be employed among the several available in the literature, and how to use them in the prescription of training intensities. As for indirect methods, there is quantitative accumulation of data supporting the utilisation of the AT to assess aerobic endurance and to prescribe training intensities. However, it appears that: there is no unique intensity corresponding to the AT, since criteria available in the literature provide inconsistent results; and the non-invasive determination of the AT using ventilatory and heart rate data instead of blood lactate concentration ([La(-)](b)) is not valid. Added to the fact that the AT may not represent the optimal training intensity for elite athletes, it raises doubt on the usefulness of this theory without questioning, however, the usefulness of the whole [La(-)](b)-power curve to assess aerobic endurance and predict performance in long-distance events.",
"title": ""
}
] | [
{
"docid": "dbc66199d6873d990a8df18ce7adf01d",
"text": "Facebook has rapidly become the most popular Social Networking Site (SNS) among faculty and students in higher education institutions in recent years. Due to the various interactive and collaborative features Facebook supports, it offers great opportunities for higher education institutions to support student engagement and improve different aspects of teaching and learning. To understand the social aspects of Facebook use among students and how they perceive using it for academic purposes, an exploratory survey has been distributed to 105 local and international students at a large public technology university in Malaysia. Results reveal consistent patterns of usage compared to what has been reported in literature reviews in relation to the intent of use and the current use for educational purposes. A comparison was conducted of male and female, international and local, postgraduate and undergraduate students respectively, using nonparametric tests. The results indicate that the students’ perception of using Facebook for academic purposes is not significantly related to students’ gender or students’ background; while it is significantly related to study level and students’ experience. Moreover, based on the overall results of the survey and literature reviews, the paper presents recommendations and suggestions for further research of social networking in a higher education context.",
"title": ""
},
{
"docid": "c26abad7f3396faa798a74cfb23e6528",
"text": "Recent advances in seismic sensor technology, data acquisition systems, digital communications, and computer hardware and software make it possible to build reliable real-time earthquake information systems. Such systems provide a means for modern urban regions to cope effectively with the aftermath of major earthquakes and, in some cases, they may even provide warning, seconds before the arrival of seismic waves. In the long term these systems also provide basic data for mitigation strategies such as improved building codes.",
"title": ""
},
{
"docid": "2875373b63642ee842834a5360262f41",
"text": "Video stabilization techniques are essential for most hand-held captured videos due to high-frequency shakes. Several 2D-, 2.5D-, and 3D-based stabilization techniques have been presented previously, but to the best of our knowledge, no solutions based on deep neural networks had been proposed to date. The main reason for this omission is shortage in training data as well as the challenge of modeling the problem using neural networks. In this paper, we present a video stabilization technique using a convolutional neural network. Previous works usually propose an off-line algorithm that smoothes a holistic camera path based on feature matching. Instead, we focus on low-latency, real-time camera path smoothing that does not explicitly represent the camera path and does not use future frames. Our neural network model, called StabNet, learns a set of mesh-grid transformations progressively for each input frame from the previous set of stabilized camera frames and creates stable corresponding latent camera paths implicitly. To train the network, we collect a dataset of synchronized steady and unsteady video pairs via a specially designed hand-held hardware. Experimental results show that our proposed online method performs comparatively to the traditional off-line video stabilization methods without using future frames while running about 10 times faster. More importantly, our proposed StabNet is able to handle low-quality videos, such as night-scene videos, watermarked videos, blurry videos, and noisy videos, where the existing methods fail in feature extraction or matching.",
"title": ""
},
{
"docid": "2615f2f66adeaf1718d7afa5be3b32b1",
"text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
},
{
"docid": "1f8115529218a17313032a88467ccc64",
"text": "s on Human Factors in Computing Systems (pp. 722–",
"title": ""
},
{
"docid": "27856dcc3b48bb86ca8bd3ca8b046385",
"text": "This paper provides evidence of the significant negative health externalities of traffic congestion. We exploit the introduction of electronic toll collection, or E-ZPass, which greatly reduced traffic congestion and emissions from motor vehicles in the vicinity of highway toll plazas. Specifically, we compare infants born to mothers living near toll plazas to infants born to mothers living near busy roadways but away from toll plazas with the idea that mothers living away from toll plazas did not experience significant reductions in local traffic congestion. We also examine differences in the health of infants born to the same mother, but who differ in terms of whether or not they were “exposed” to E-ZPass. We find that reductions in traffic congestion generated by E-ZPass reduced the incidence of prematurity and low birth weight among mothers within 2km of a toll plaza by 6.7-9.1% and 8.5-11.3% respectively, with larger effects for African-Americans, smokers, and those very close to toll plazas. There were no immediate changes in the characteristics of mothers or in housing prices in the vicinity of toll plazas that could explain these changes, and the results are robust to many changes in specification. The results suggest that traffic congestion is a significant contributor to poor health in affected infants. Estimates of the costs of traffic congestion should account for these important health externalities. * We are grateful to the MacArthur foundation for financial support. We thank Katherine Hempstead and Matthew Weinberg of the New Jersey Department of Health, and Craig Edelman of the Pennsylvania Department of Health for facilitating our access to the data. We are grateful to James MacKinnon and seminar participants at Harvard University, the University of Maryland, Queens University, Princeton University, the NBER Summer Institute, the SOLE/EALE 2010 meetings, Tulane University, and Uppsala University for helpful comments. All opinions and any errors are our own. Motor vehicles are a major source of air pollution. Nationally they are responsible for over 50% of carbon monoxide (CO), 34 percent of nitrogen oxide (NO2) and over 29 percent of hydrocarbon emissions in addition to as much as 10 percent of fine particulate matter emissions (Ernst et al., 2003). In urban areas, vehicles are the dominant source of these emissions. Furthermore, between 1980 and 2003 total vehicle miles traveled (VMT) in urban areas in the United States increased by 111% against an increase in urban lane-miles of only 51% (Bureau of Transportation Statistics, 2004). As a result, traffic congestion has steadily increased across the United States, causing 3.7 billion hours of delay by 2003 and wasting 2.3 billion gallons of motor fuel (Schrank and Lomax, 2005). Traditional estimates of the cost of congestion typically include delay costs (Vickrey, 1969), but they rarely address other congestion externalities such as the health effects of congestion. This paper seeks to provide estimates of the health effects of traffic congestion by examining the effect of a policy change that caused a sharp drop in congestion (and therefore in the level of local motor vehicle emissions) within a relatively short time frame at different sites across the northeastern United States. Engineering studies suggest that the introduction of electronic toll collection (ETC) technology, called E-ZPass in the Northeast, sharply reduced delays at toll plazas and pollution caused by idling, decelerating, and accelerating. 
We study the effect of E-ZPass, and thus the sharp reductions in local traffic congestion, on the health of infants born to mothers living near toll plazas. This question is of interest for three reasons. First, there is increasing evidence of the long-term effects of poor health at birth on future outcomes. For example, low birth weight has been linked to future health problems and lower educational attainment (see Currie (2009) for a summary of this research). The debate over the costs and benefits of emission controls and traffic congestion policies could be significantly impacted by evidence that traffic congestion has a deleterious effect on fetal health. Second, the study of newborns overcomes several difficulties in making the connection between pollution and health because, unlike adult diseases that may reflect pollution exposure that occurred many years ago, the link between cause and effect is immediate. Third, E-ZPass is an interesting policy experiment because, while pollution control was an important consideration for policy makers, the main motive for consumers to sign up for E-ZPass is to reduce travel time. Hence, E-ZPass offers an example of achieving reductions in pollution by bundling emissions reductions with something consumers perhaps value more highly such as reduced travel time. Our analysis improves upon much of the previous research linking air pollution to fetal health as well as on the somewhat smaller literature focusing specifically on the relationship between residential proximity to busy roadways and poor pregnancy outcomes. Since air pollution is not randomly assigned, studies that attempt to compare health outcomes for populations exposed to differing pollution levels may not be adequately controlling for confounding determinants of health. Since air quality is capitalized into housing prices (see Chay and Greenstone, 2003) families with higher incomes or preferences for cleaner air are likely to sort into locations with better air quality, and failure to account for this sorting will lead to overestimates of the effects of pollution. Alternatively, pollution levels are higher in urban areas where there are often more educated individuals with better access to health care, which can cause underestimates of the true effects of pollution on health. In the absence of a randomized trial, we exploit a policy change that created large local and persistent reductions in traffic congestion and traffic related air emissions for certain segments along a highway. We compare the infant health outcomes of those living near an electronic toll plaza before and after implementation of E-ZPass to those living near a major highway but further away from a toll plaza. Specifically, we compare mothers within 2 kilometers of a toll plaza to mothers who are between 2 and 10 km from a toll plaza but still within 3 kilometers of a major highway before and after the adoption of E-ZPass in New Jersey and Pennsylvania. New Jersey and Pennsylvania provide a compelling setting for our particular research design. First, both New Jersey and Pennsylvania are heavily populated, with New Jersey being the most densely populated state in the United States and Pennsylvania being the sixth most populous state in the country. As a result, these two states have some of the busiest interstate systems in the country, systems that also happen to be densely surrounded by residential housing. 
Furthermore, we know the exact addresses of mothers, in contrast to many observational studies which approximate the individual’s location as the centroid of a geographic area or by computing average pollution levels within the geographic area. This information enables us to improve on the assignment of pollution exposure. Lastly, E-ZPass adoption and take up was extremely quick, and the reductions in congestion spillover to all automobiles, not just those registered with E-ZPass (New Jersey Transit Authority, 2001). Our difference-in-differences research design relies on the assumption that the characteristics of mothers near a toll plaza change over time in a way that is comparable to those of other mothers who live further away from a plaza but still close to a major highway. We test this assumption by examining the way that observable characteristics of the two groups of mothers and housing prices change before and after E-ZPass adoption. We also estimate a range of alternative specifications in an effort to control for unobserved characteristics of mothers and neighborhoods that could confound our estimates. We find significant effects on infant health. The difference-in-difference models suggest that prematurity fell by 6.7-9.16% among mothers within 2km of a toll plaza, while the incidence of low birth weight fell by 8.5-11.3%. We argue that these are large but not implausible effects given previous studies. In contrast, we find that there are no significant effects of E-ZPass adoption on the demographic characteristics of mothers in the vicinity of a toll plaza. We also find no immediate effect on housing prices, suggesting that the composition of women giving birth near toll plazas shows little change in the immediate aftermath of E-ZPass adoption (though of course it might change more over time). The rest of the paper is laid out as follows: Section I provides necessary background. Section II describes our methods, while data are described in Section III. Section IV presents our results. Section VI discusses the magnitude of the effects we find, and Section V details our conclusions. I. Background Many studies suggest an association between air pollution and fetal health. Mattison et al. (2003) and Glinianaia et al. (2004) summarize much of the literature. For more recent papers see for example Currie et al. (2009); Dugandzic et al. (2006); Huynh et al. (2006); Karr et al. (2009); Lee et al. (2008); Leem et al. (2006); Liu et al. (2007); Parker et al. (2005); Salam et al. (2005); Ritz et al. (2006); Wilhelm and Ritz (2005); Woodruff et al. (2008). Since traffic is a major contributor to air pollution, several studies have focused specifically on the effects of exposure to motor vehicle exhaust (see Wilhelm and Ritz (2003); Ponce et al. (2005); Brauer et 1 There is also a large literature linking air pollution and child health, some of it focusing on the effects of traffic on child health. See Schwartz (2004) and Glinianaia et al. (2004b) for reviews. ",
"title": ""
},
{
"docid": "fa404bb1a60c219933f1666552771ada",
"text": "A novel low voltage self-biased high swing cascode current mirror (SHCCM) employing bulk-driven NMOS transistors is proposed in this paper. The comparison with the conventional circuit reveals that the proposed bulk-driven circuit operates at lower voltages and provides enhanced bandwidth with improved output resistance. The proposed circuit is further modified by replacing the passive resistance by active MOS realization. Small signal analysis of the proposed and conventional SHCCM are carried out to show the improvement achieved through the proposed circuit. The circuits are simulated in standard SPICE 0.25 mm CMOS technology and simulated results are compared with the theoretically obtained results. To ensure robustness of the proposed SHCCM, simulation results of component tolerance and process variation have also been included. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "462f8689f7be66267bfb77f99352e93a",
"text": "Face recognition under variable pose and illumination is a challenging problem in computer vision tasks. In this paper, we solve this problem by proposing a new residual based deep face reconstruction neural network to extract discriminative pose-and-illumination-invariant (PII) features. Our deep model can change arbitrary pose and illumination face images to the frontal view with standard illumination. We propose a new triplet-loss training method instead of Euclidean loss to optimize our model, which has two advantages: a) The training triplets can be easily augmented by freely choosing combinations of labeled face images, in this way, overfitting can be avoided; b) The triplet-loss training makes the PII features more discriminative even when training samples have similar appearance. By using our PII features, we achieve 83.8% average recognition accuracy on MultiPIE face dataset which is competitive to the state-of-the-art face recognition methods.",
"title": ""
},
{
"docid": "edbbf1491e552346d42d39ebf90fc9fc",
"text": "The use of ICT in the classroom is very important for providing opportunities for students to learn to operate in an information age. Studying the obstacles to the use of ICT in education may assist educators to overcome these barriers and become successful technology adopters in the future. This paper provides a meta-analysis of the relevant literature that aims to present the perceived barriers to technology integration in science education. The findings indicate that teachers had a strong desire for to integrate ICT into education; but that, they encountered many barriers. The major barriers were lack of confidence, lack of competence, and lack of access to resources. Since confidence, competence and accessibility have been found to be the critical components of technology integration in schools, ICT resources including software and hardware, effective professional development, sufficient time, and technical support need to be provided to teachers. No one component in itself is sufficient to provide good teaching. However, the presence of all components increases the possibility of excellent integration of ICT in learning and teaching opportunities. Generally, this paper provides information and recommendation to those responsible for the integration of new technologies into science education.",
"title": ""
},
{
"docid": "eb4d350f389c6f046b81e4459fcb236c",
"text": "Customer relationship management (CRM) in business‐to‐business (B2B) e‐commerce Yun E. Zeng H. Joseph Wen David C. Yen Article information: To cite this document: Yun E. Zeng H. Joseph Wen David C. Yen, (2003),\"Customer relationship management (CRM) in business#to#business (B2B) e#commerce\", Information Management & Computer Security, Vol. 11 Iss 1 pp. 39 44 Permanent link to this document: http://dx.doi.org/10.1108/09685220310463722",
"title": ""
},
{
"docid": "fca372687a77fd27b8c56ed494a6628b",
"text": "Sentiment analysis is the computational study of opinions, sentiments, evaluations, attitudes, views and emotions expressed in text. It refers to a classification problem where the main focus is to predict the polarity of words and then classify them into positive or negative sentiment. Sentiment analysis over Twitter offers people a fast and effective way to measure the public's feelings towards their party and politicians. The primary issues in previous sentiment analysis techniques are classification accuracy, as they incorrectly classify most of the tweets with the biasing towards the training data. In opinion texts, lexical content alone also can be misleading. Therefore, here we adopt a lexicon based sentiment analysis method, which will exploit the sense definitions, as semantic indicators of sentiment. Here we propose a novel approach for accurate sentiment classification of twitter messages using lexical resources SentiWordNet and WordNet along with Word Sense Disambiguation. Thus we applied the SentiWordNet lexical resource and Word Sense Disambiguation for finding political sentiment from real time tweets. Our method also uses a negation handling as a pre-processing step in order to achieve high accuracy.",
"title": ""
},
{
"docid": "f1c5f6f2bdff251e91df1dbd1e2302b2",
"text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.",
"title": ""
},
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "6724f1e8a34a6d9f64a30061ce7f67c0",
"text": "Mental contrasting with implementation intentions (MCII) has been found to improve selfregulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.",
"title": ""
},
{
"docid": "cdfec1296a168318f773bb7ef0bfb307",
"text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.",
"title": ""
},
{
"docid": "ca932a0b6b71f009f95bad6f2f3f8a38",
"text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes",
"title": ""
},
{
"docid": "c4d4cb398cfa5cbae37879c385a9a6ed",
"text": "Performing large-scale malware classification is increasingly becoming a critical step in malware analytics as the number and variety of malware samples is rapidly growing. Statistical machine learning constitutes an appealing method to cope with this increase as it can use mathematical tools to extract information out of large-scale datasets and produce interpretable models. This has motivated a surge of scientific work in developing machine learning methods for detection and classification of malicious executables. However, an optimal method for extracting the most informative features for different malware families, with the final goal of malware classification, is yet to be found. Fortunately, neural networks have evolved to the state that they can surpass the limitations of other methods in terms of hierarchical feature extraction. Consequently, neural networks can now offer superior classification accuracy in many domains such as computer vision and natural language processing. In this paper, we transfer the performance improvements achieved in the area of neural networks to model the execution sequences of disassembled malicious binaries. We implement a neural network that consists of convolutional and feedforward neural constructs. This architecture embodies a hierarchical feature extraction approach that combines convolution of n-grams of instructions with plain vectorization of features derived from the headers of the Portable Executable (PE) files. Our evaluation results demonstrate that our approach outperforms baseline methods, such as simple Feedforward Neural Networks and Support Vector Machines, as we achieve 93% on precision and recall, even in case of obfuscations in the data.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "3bde393992b3055083e7348d360f7ec5",
"text": "A new smart power switch for industrial, automotive and computer applications developed in BCD (Bipolar, CMOS, DMOS) technology is described. It consists of an on-chip 70 mΩ power DMOS transistor connected in high side configuration and its driver makes the device virtually indestructible and suitable to drive any kind of load with an output current of 2.5 A. If the load is inductive, an internal voltage clamp allows fast demagnetization down to 55 V under the supply voltage. The device includes novel structures for the driver, the fully integrated charge pump circuit and its oscillator. These circuits have specifically been designed to reduce ElectroMagnetic Interference (EMI) thanks to an accurate control of the output voltage slope and the reduction of the output voltage ripple caused by the charge pump itself (several patents pending). An innovative open load circuit allows the detection of the open load condition with high precision (2 to 4 mA within the temperature range and including process spreads). The quiescent current has also been reduced to 600 uA. Diagnostics for CPU feedback is available at the external connections of the chip when the following fault conditions occur: open load; output short circuit to supply voltage; overload or output short circuit to ground; over temperature; under voltage supply.",
"title": ""
}
] | scidocsrr |
a38492ed7d3a6ca0d75054765f346f6f | Personalized Prognostic Models for Oncology: A Machine Learning Approach | [
{
"docid": "a88c0d45ca7859c050e5e76379f171e6",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
}
] | [
{
"docid": "30dffba83b24e835a083774aa91e6c59",
"text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.",
"title": ""
},
{
"docid": "3aa4fd13689907ae236bd66c8a7ed8c8",
"text": "Biomedical named entity recognition(BNER) is a crucial initial step of information extraction in biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully used for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. We present a recurrent neural network (RNN) framework based on word embeddings and character representation. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, which is useful for this task, can be well modeled by bidirectional variation and long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance — 86.55% F1 on BioCreative II gene mention (GM) corpus and 73.79% F1 on JNLPBA 2004 corpus. Our neural network architecture can be successfully used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representation can improve the performance of the LSTM-RNN models. On the GM corpus, we achieve comparable performance compared with other systems using complex hand-crafted features. Considering the JNLPBA corpus, our model achieves the best results, outperforming the previously top performing systems. The source code of our method is freely available under GPL at https://github.com/lvchen1989/BNER .",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "955c7d91d4463fc50feb93320b7c370c",
"text": "The major problem in the use of the Web is that of searching for relevant information that meets the expectations of a user. This problem increases every day and especially with the emergence of web 2.0 or social web. Our paper, therefore, ignores the disadvantage of social web and operates it to rich user profile.",
"title": ""
},
{
"docid": "96d6173f58e36039577c8e94329861b2",
"text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.",
"title": ""
},
{
"docid": "1cbf55610014ef23e4015c07f5846619",
"text": "Variation of the system parameters and external disturbances always happen in the CNC servo system. With a traditional PID controller, it will cause large overshoot or poor stability. In this paper, a fuzzy-PID controller is proposed in order to improve the performance of the servo system. The proposed controller incorporates the advantages of PID control which can eliminate the steady-state error, and the advantages of fuzzy logic such as simple design, no need of an accurate mathematical model and some adaptability to nonlinearity and time-variation. The fuzzy-PID controller accepts the error (e) and error change(ec) as inputs ,while the parameters kp, ki, kd as outputs. Control rules of the controller are established based on experience so that self-regulation of the values of PID parameters is achieved. A simulation model of position servo system is constructed in Matlab/Simulink module based on a high-speed milling machine researched in our institute. By comparing the traditional PID controller and the fuzzy-PID controller, the simulation results show that the system has stronger robustness and disturbance rejection capability with the latter controller which can meet the performance requirements of the CNC position servo system better",
"title": ""
},
{
"docid": "e146a0534b5a81ac6f332332056ae58c",
"text": "Paraphrase identification is an important topic in artificial intelligence and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of attention mechanism, which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. In this paper, we empower neural architecture with Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Extensive experiments show that our model outperforms other baselines, substantially and significantly.",
"title": ""
},
{
"docid": "cd35c6e2763b634d23de1903a3261c59",
"text": "We investigate the Belousov-Zhabotinsky (BZ) reaction in an attempt to establish a basis for computation using chemical oscillators coupled via inhibition. The system consists of BZ droplets suspended in oil. Interdrop coupling is governed by the non-polar communicator of inhibition, Br2. We consider a linear arrangement of three droplets to be a NOR gate, where the center droplet is the output and the other two are inputs. Oxidation spikes in the inputs, which we define to be TRUE, cause a delay in the next spike of the output, which we read to be FALSE. Conversely, when the inputs do not spike (FALSE) there is no delay in the output (TRUE), thus producing the behavior of a NOR gate. We are able to reliably produce NOR gates with this behavior in microfluidic experiment.",
"title": ""
},
{
"docid": "35ac15f19cefd103f984519e046e407c",
"text": "This paper presents a highly sensitive sensor for crack detection in metallic surfaces. The sensor is inspired by complementary split-ring resonators which have dimensions much smaller than the excitation’s wavelength. The entire sensor is etched in the ground plane of a microstrip line and fabricated using printed circuit board technology. Compared to available microwave techniques, the sensor introduced here has key advantages including high sensitivity, increased dynamic range, spatial resolution, design simplicity, selectivity, and scalability. Experimental measurements showed that a surface crack having 200-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> width and 2-mm depth gives a shift in the resonance frequency of 1.5 GHz. This resonance frequency shift exceeds what can be achieved using other sensors operating in the low GHz frequency regime by a significant margin. In addition, using numerical simulation, we showed that the new sensor is able to resolve a 10-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>-wide crack (equivalent to <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula>/4000) with 180-MHz shift in the resonance frequency.",
"title": ""
},
{
"docid": "bde1d85da7f1ac9c9c30b0fed448aac6",
"text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.",
"title": ""
},
{
"docid": "1b790d2a5b9d8f6a911efee43ee2a9d2",
"text": "Content Centric Networking (CCN) represents an important change in the current operation of the Internet, prioritizing content over the communication between end nodes. Routers play an essential role in CCN, since they receive the requests for a given content and provide content caching for the most popular ones. They have their own forwarding strategies and caching policies for the most popular contents. Despite the number of works on this field, experimental evaluation of different forwarding algorithms and caching policies yet demands a huge effort in routers programming. In this paper we propose SDCCN, a SDN approach to CCN that provides programmable forwarding strategy and caching policies. SDCCN allows fast prototyping and experimentation in CCN. Proofs of concept were performed to demonstrate the programmability of the cache replacement algorithms and the Strategy Layer. Experimental results, obtained through implementation in the Mininet environment, are presented and evaluated.",
"title": ""
},
{
"docid": "9bf26d0e444ab8332ac55ce87d1b7797",
"text": "Toll like receptors (TLR)s have a central role in regulating innate immunity and in the last decade studies have begun to reveal their significance in potentiating autoimmune diseases such as rheumatoid arthritis (RA). Earlier investigations have highlighted the importance of TLR2 and TLR4 function in RA pathogenesis. In this review, we discuss the newer data that indicate roles for TLR5 and TLR7 in RA and its preclinical models. We evaluate the pathogenicity of TLRs in RA myeloid cells, synovial tissue fibroblasts, T cells, osteoclast progenitor cells and endothelial cells. These observations establish that ligation of TLRs can transform RA myeloid cells into M1 macrophages and that the inflammatory factors secreted from M1 and RA synovial tissue fibroblasts participate in TH-17 cell development. From the investigations conducted in RA preclinical models, we conclude that TLR-mediated inflammation can result in osteoclastic bone erosion by interconnecting the myeloid and TH-17 cell response to joint vascularization. In light of emerging unique aspects of TLR function, we summarize the novel approaches that are being tested to impair TLR activation in RA patients.",
"title": ""
},
{
"docid": "2afb992058eb720ff0baf4216e3a22c2",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "f5c04016ea72c94437cb5baeb556b01d",
"text": "This paper reports the design of a three pass stemmer STHREE for Malayalam. The language is rich in morphological variations but poor in linguistic computational resources. The system returns the meaningful root word of the input word in 97% of the cases when tested with 1040 words. This is a significant improvement over the reported accuracy of SILPA system, the only known stemmer for Malayalam, with the same test data sets.",
"title": ""
},
{
"docid": "427028ef819df3851e37734e5d198424",
"text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.",
"title": ""
},
{
"docid": "5de517f8ccdbf12228ca334173ecf797",
"text": "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIAHWDB/OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online/offline isolated character recognition, online/offline handwritten text recognition. The best results (correct rates) are 93.89% for classification on extracted features, 94.77% for offline character recognition, 97.39% for online character recognition, 88.76% for offline text recognition, and 95.03% for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results. Keywords—Chinese handwriting recognition competition; isolated character recongition; handwritten text recognition; offline; online; CASIA-HWDB/OLHWDB database.",
"title": ""
},
{
"docid": "924dbc783bf8743a28c2cd4563d50de9",
"text": "This paper studies the off-policy evaluation problem, where one aims to estimate the value of a target policy based on a sample of observations collected by another policy. We first consider the multi-armed bandit case, establish a minimax risk lower bound, and analyze the risk of two standard estimators. It is shown, and verified in simulation, that one is minimax optimal up to a constant, while another can be arbitrarily worse, despite its empirical success and popularity. The results are applied to related problems in contextual bandits and fixed-horizon Markov decision processes, and are also related to semi-supervised learning.",
"title": ""
},
{
"docid": "27ed0ab08b10935d12b59b6d24bed3f1",
"text": "A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction - hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.",
"title": ""
},
{
"docid": "fe3a2ef6ffc3e667f73b19f01c14d15a",
"text": "The study of socio-technical systems has been revolutionized by the unprecedented amount of digital records that are constantly being produced by human activities such as accessing Internet services, using mobile devices, and consuming energy and knowledge. In this paper, we describe the richest open multi-source dataset ever released on two geographical areas. The dataset is composed of telecommunications, weather, news, social networks and electricity data from the city of Milan and the Province of Trentino. The unique multi-source composition of the dataset makes it an ideal testbed for methodologies and approaches aimed at tackling a wide range of problems including energy consumption, mobility planning, tourist and migrant flows, urban structures and interactions, event detection, urban well-being and many others.",
"title": ""
}
] | scidocsrr |
c2e0f5a2362d741cd300ba72025cf93b | Automatic detection of cyberbullying in social media text | [
{
"docid": "c447e34a5048c7fe2d731aaa77b87dd3",
"text": "Bullying, in both physical and cyber worlds, has been recognized as a serious health issue among adolescents. Given its significance, scholars are charged with identifying factors that influence bullying involvement in a timely fashion. However, previous social studies of bullying are handicapped by data scarcity. The standard psychological science approach to studying bullying is to conduct personal surveys in schools. The sample size is typically in the hundreds, and these surveys are often collected only once. On the other hand, the few computational studies narrowly restrict themselves to cyberbullying, which accounts for only a small fraction of all bullying episodes.",
"title": ""
},
{
"docid": "f91a507a9cb7bdee2e8c3c86924ced8d",
"text": "a r t i c l e i n f o It is often stated that bullying is a \" group process \" , and many researchers and policymakers share the belief that interventions against bullying should be targeted at the peer-group level rather than at individual bullies and victims. There is less insight into what in the group level should be changed and how, as the group processes taking place at the level of the peer clusters or school classes have not been much elaborated. This paper reviews the literature on the group involvement in bullying, thus providing insight into the individuals' motives for participation in bullying, the persistence of bullying, and the adjustment of victims across different peer contexts. Interventions targeting the peer group are briefly discussed and future directions for research on peer processes in bullying are suggested. Bullying is a subtype of aggressive behavior, in which an individual or a group of individuals repeatedly attacks, humiliates, and/or excludes a relatively powerless person. The majority of studies on the topic have been conducted in schools, focusing on bullying among the concept of bullying is used to refer to peer-to-peer bullying among school-aged children and youth, when not otherwise mentioned. It is known that a sizable minority of primary and secondary school students is involved in peer-to-peer bullying either as perpetrators or victims — or as both, being both bullied themselves and harassing others. In WHO's Health Behavior in School-Aged Children survey (HBSC, see Craig & Harel, 2004), the average prevalence of victims across the 35 countries involved was 11%, whereas bullies represented another 11%. Children who report both bullying others and being bullied by others (so-called bully–victims) were not identified in the HBSC study, but other studies have shown that approximately 4–6% of the children can be classified as bully–victims (Haynie et al., 2001; Nansel et al., 2001). Bullying constitutes a serious risk for the psychosocial and academic adjustment of both victims",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
}
] | [
{
"docid": "eb85cffda3aec56b77ae016ac6f73011",
"text": "This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent longrange associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN. The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.",
"title": ""
},
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
{
"docid": "d50550fe203ffe135ef90dd0b20cd975",
"text": "The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.",
"title": ""
},
{
"docid": "db252efe7bde6cc0d58e337f8ad04271",
"text": "Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction, and acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named \"automated social skills trainer,\" which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features have a relationship with autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skill by using the system for 50 minutes.",
"title": ""
},
{
"docid": "66451aa5a41ec7f9246d749c0983fa60",
"text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.",
"title": ""
},
{
"docid": "c9acadfba9aa66ef6e7f4bc1d86943f6",
"text": "We propose a new saliency detection model by combining global information from frequency domain analysis and local information from spatial domain analysis. In the frequency domain analysis, instead of modeling salient regions, we model the nonsalient regions using global information; these so-called repeating patterns that are not distinctive in the scene are suppressed by using spectrum smoothing. In spatial domain analysis, we enhance those regions that are more informative by using a center-surround mechanism similar to that found in the visual cortex. Finally, the outputs from these two channels are combined to produce the saliency map. We demonstrate that the proposed model has the ability to highlight both small and large salient regions in cluttered scenes and to inhibit repeating objects. Experimental results also show that the proposed model outperforms existing algorithms in predicting objects regions where human pay more attention.",
"title": ""
},
{
"docid": "20ac5cea816906d595a65915680575f2",
"text": "A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given.",
"title": ""
},
{
"docid": "b829049a8abf47f8f13595ca54eaa009",
"text": "This paper describes a face recognition-based people tracking and re-identification system for RGB-D camera networks. The system tracks people and learns their faces online to keep track of their identities even if they move out from the camera's field of view once. For robust people re-identification, the system exploits the combination of a deep neural network- based face representation and a Bayesian inference-based face classification method. The system also provides a predefined people identification capability: it associates the online learned faces with predefined people face images and names to know the people's whereabouts, thus, allowing a rich human-system interaction. Through experiments, we validate the re-identification and the predefined people identification capabilities of the system and show an example of the integration of the system with a mobile robot. The overall system is built as a Robot Operating System (ROS) module. As a result, it simplifies the integration with the many existing robotic systems and algorithms which use such middleware. The code of this work has been released as open-source in order to provide a baseline for the future publications in this field.",
"title": ""
},
{
"docid": "101554958aedffeaa26e429fca84e661",
"text": "Many healthcare reforms are to digitalize and integrate healthcare information systems. However, the disparity of business benefits in having an integrated healthcare information system (IHIS) varies with organizational fit factors. Critical success factors (CSFs) exist for hospitals to implement an IHIS successfully. This study investigated the relationship between the organizational fit and the system success. In addition, we examined the moderating effect of five CSFs -information systems adjustment, business process adjustment, organizational resistance, top management support, and the capability of key team members – in an IHIS implementation. Fifty-three hospitals that have successfully undertaken IHIS projects participated in this study. We used regression analysis to assess the relationships. The findings of this study provide a roadmap for hospitals to capitalize on the organizational fit and the five critical success factors in order to implement successful IHIS projects. Shin-Yuan Hung, Charlie Chen, Kuan-Hsiuang Wang (2014) \"Critical Success Factors For The Implementation Of Integrated Healthcare Information Systems Projects: An Organizational Fit Perspective\" Communication of the Association for Information Systems volume 34 Article 39 Version of record Available @ www.aisel.aisnet.org",
"title": ""
},
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "624806aa09127fbca2e01c9d52b5764a",
"text": "Over the last few years, increased interest has arisen with respect to age-related tasks in the Computer Vision community. As a result, several \"in-the-wild\" databases annotated with respect to the age attribute became available in the literature. Nevertheless, one major drawback of these databases is that they are semi-automatically collected and annotated and thus they contain noisy labels. Therefore, the algorithms that are evaluated in such databases are prone to noisy estimates. In order to overcome such drawbacks, we present in this paper the first, to the best of knowledge, manually collected \"in-the-wild\" age database, dubbed AgeDB, containing images annotated with accurate to the year, noise-free labels. As demonstrated by a series of experiments utilizing state-of-the-art algorithms, this unique property renders AgeDB suitable when performing experiments on age-invariant face verification, age estimation and face age progression \"in-the-wild\".",
"title": ""
},
{
"docid": "2acb16f1e67f141220dc05b90ac23385",
"text": "By combining patch-clamp methods with two-photon microscopy, it is possible to target recordings to specific classes of neurons in vivo. Here we describe methods for imaging and recording from the soma and dendrites of neurons identified using genetically encoded probes such as green fluorescent protein (GFP) or functional indicators such as Oregon Green BAPTA-1. Two-photon targeted patching can also be adapted for use with wild-type brains by perfusing the extracellular space with a membrane-impermeable dye to visualize the cells by their negative image and target them for electrical recordings, a technique termed \"shadowpatching.\" We discuss how these approaches can be adapted for single-cell electroporation to manipulate specific cells genetically. These approaches thus permit the recording and manipulation of rare genetically, morphologically, and functionally distinct subsets of neurons in the intact nervous system.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "d906d31f32ad89a843645cad98eab700",
"text": "Deep Learning has led to a dramatic leap in SuperResolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.",
"title": ""
},
{
"docid": "5d2c1095a34ee582f490f4b0392a3da0",
"text": "We study the problem of online learning to re-rank, where users provide feedback to improve the quality of displayed lists. Learning to rank has been traditionally studied in two settings. In the offline setting, rankers are typically learned from relevance labels of judges. These approaches have become the industry standard. However, they lack exploration, and thus are limited by the information content of offline data. In the online setting, an algorithm can propose a list and learn from the feedback on it in a sequential fashion. Bandit algorithms developed for this setting actively experiment, and in this way overcome the biases of offline data. But they also tend to ignore offline data, which results in a high initial cost of exploration. We propose BubbleRank, a bandit algorithm for re-ranking that combines the strengths of both settings. The algorithm starts with an initial base list and improves it gradually by swapping higher-ranked less attractive items for lower-ranked more attractive items. We prove an upper bound on the n-step regret of BubbleRank that degrades gracefully with the quality of the initial base list. Our theoretical findings are supported by extensive numerical experiments on a large real-world click dataset.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "13fc420d1fa63445c29c4107734e2943",
"text": "As technology advances, more and more devices have Internet access. This gives rise to the Internet of Things. With all these new devices connected to the Internet, cybercriminals are undoubtedly trying to take advantage of these devices, especially when they have poor protection. These botnets will have a large amount of processing power in the near future. This paper will elaborate on how much processing power these IoT botnets can gain and to what extend cryptocurrencies will be influenced by it. This will be done through a literature study which is validated through an experiment.",
"title": ""
},
{
"docid": "74b163a2c2f149dce9850c6ff5d7f1f6",
"text": "The vast majority of cutaneous canine nonepitheliotropic lymphomas are of T cell origin. Nonepithelial Bcell lymphomas are extremely rare. The present case report describes a 10-year-old male Golden retriever that was presented with slowly progressive nodular skin lesions on the trunk and limbs. Histopathology of skin biopsies revealed small periadnexal dermal nodules composed of rather pleomorphic round cells with round or contorted nuclei. The diagnosis of nonepitheliotropic cutaneous B-cell lymphoma was based on histopathological morphology and case follow-up, and was supported immunohistochemically by CD79a positivity.",
"title": ""
},
{
"docid": "0cae8939c57ff3713d7321102c80816e",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
{
"docid": "e44f67fec39390f215b5267c892d1a26",
"text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.",
"title": ""
}
] | scidocsrr |
ad1f409ebcef4ddcf9b58c6dd80771ef | Investigation of forecasting methods for the hourly spot price of the day-ahead electric power markets | [
{
"docid": "508eb69a9e6b0194fbda681439e404c4",
"text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.",
"title": ""
}
] | [
{
"docid": "f85a8a7e11a19d89f2709cc3c87b98fc",
"text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.",
"title": ""
},
{
"docid": "d9aa5e0d687add02a6b31759c482489c",
"text": "This paper presents an accurate and fast algorithm for road segmentation using convolutional neural network (CNN) and gated recurrent units (GRU). For autonomous vehicles, road segmentation is a fundamental task that can provide the drivable area for path planning. The existing deep neural network based segmentation algorithms usually take a very deep encoder-decoder structure to fuse pixels, which requires heavy computations, large memory and long processing time. Hereby, a CNN-GRU network model is proposed and trained to perform road segmentation using data captured by the front camera of a vehicle. GRU network obtains a long spatial sequence with lower computational complexity, comparing to traditional encoderdecoder architecture. The proposed road detector is evaluated on the KITTI road benchmark and achieves high accuracy for road segmentation at real-time processing speed.",
"title": ""
},
{
"docid": "5b5d4c33a600d93b8b999a51318980da",
"text": "In this work, we focused on liveness detection for facial recognition system's spoofing via fake face movement. We have developed a pupil direction observing system for anti-spoofing in face recognition systems using a basic hardware equipment. Firstly, eye area is being extracted from real time camera by using Haar-Cascade Classifier with specially trained classifier for eye region detection. Feature points have extracted and traced for minimizing person's head movements and getting stable eye region by using Kanade-Lucas-Tomasi (KLT) algorithm. Eye area is being cropped from real time camera frame and rotated for a stable eye area. Pupils are extracted from eye area by using a new improved algorithm subsequently. After a few stable number of frames that has pupils, proposed spoofing algorithm selects a random direction and sends a signal to Arduino to activate that selected direction's LED on a square frame that has totally eight LEDs for each direction. After chosen LED has been activated, eye direction is observed whether pupil direction and LED's position matches. If the compliance requirement is satisfied, algorithm returns data that contains liveness information. Complete algorithm for liveness detection using pupil tracking is tested on volunteers and algorithm achieved high success ratio.",
"title": ""
},
{
"docid": "f25b9147e67bd8051852142ebd82cf20",
"text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.",
"title": ""
},
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
},
{
"docid": "a737511620632ac8920a20d566c93974",
"text": "Hidradenitis suppurativa (HS) is an inflammatory skin disease. Several observations imply that sex hormones may play a role in its pathogenesis. HS is more common in women, and the disease severity appears to vary in intensity according to the menstrual cycle. In addition, parallels have been drawn between HS and acne vulgaris, suggesting that sex hormones may play a role in the condition. The role of androgens and estrogens in HS has therefore been explored in numerous observational and some interventional studies; however, the studies have often reported conflicting results. This systematic review includes 59 unique articles and aims to give an overview of the available research. Articles containing information on natural variation, severity changes during menstruation and pregnancy, as well as articles on serum levels of hormones in patients with HS and the therapeutic options of hormonal manipulation therapy have all been included and are presented in this systematic review. Our results show that patients with HS do not seem to have increased levels of sex hormones and that their hormone levels lie within the normal range. While decreasing levels of progesterone and estrogen seem to coincide with disease flares in premenopausal women, the association is speculative and requires experimental confirmation. Antiandrogen treatment could be a valuable approach in treating HS, however randomized control trials are lacking.",
"title": ""
},
{
"docid": "7e8723331aaec6b4f448030a579fa328",
"text": "With the recent trend toward more non extraction treatment, several appliances have been advocated to distalize molars in the upper arch. Certain principles, as outlined by Burstone, must be borne in mind when designing such an appliance:",
"title": ""
},
{
"docid": "3d81867b694a7fa56383583d9ee2637f",
"text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on- the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.",
"title": ""
},
{
"docid": "3fe5ea7769bfd7e7ea0adcb9ae497dcf",
"text": "Working memory emerges in infancy and plays a privileged role in subsequent adaptive cognitive development. The neural networks important for the development of working memory during infancy remain unknown. We used diffusion tensor imaging (DTI) and deterministic fiber tracking to characterize the microstructure of white matter fiber bundles hypothesized to support working memory in 12-month-old infants (n=73). Here we show robust associations between infants' visuospatial working memory performance and microstructural characteristics of widespread white matter. Significant associations were found for white matter tracts that connect brain regions known to support working memory in older children and adults (genu, anterior and superior thalamic radiations, anterior cingulum, arcuate fasciculus, and the temporal-parietal segment). Better working memory scores were associated with higher FA and lower RD values in these selected white matter tracts. These tract-specific brain-behavior relationships accounted for a significant amount of individual variation above and beyond infants' gestational age and developmental level, as measured with the Mullen Scales of Early Learning. Working memory was not associated with global measures of brain volume, as expected, and few associations were found between working memory and control white matter tracts. To our knowledge, this study is among the first demonstrations of brain-behavior associations in infants using quantitative tractography. The ability to characterize subtle individual differences in infant brain development associated with complex cognitive functions holds promise for improving our understanding of normative development, biomarkers of risk, experience-dependent learning and neuro-cognitive periods of developmental plasticity.",
"title": ""
},
{
"docid": "28e9bb0eef126b9969389068b6810073",
"text": "This paper presents the task specifications for designing a novel Insertable Robotic Effectors Platform (IREP) with integrated stereo vision and surgical intervention tools for Single Port Access Surgery (SPAS). This design provides a compact deployable mechanical architecture that may be inserted through a single Ø15 mm access port. Dexterous surgical intervention and stereo vision are achieved via the use of two snake-like continuum robots and two controllable CCD cameras. Simulations and dexterity evaluation of our proposed design are compared to several design alternatives with different kinematic arrangements. Results of these simulations show that dexterity is improved by using an independent revolute joint at the tip of a continuum robot instead of achieving distal rotation by transmission of rotation about the backbone of the continuum robot. Further, it is shown that designs with two robotic continuum robots as surgical arms have diminished dexterity if the bases of these arms are close to each other. This result justifies our design and points to ways of improving the performance of existing designs that use continuum robots as surgical arms.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "09fdc74a146a876e44bec1eca1bf7231",
"text": "With more and more people around the world learning Chinese as a second language, the need of Chinese error correction tools is increasing. In the HSK dynamic composition corpus, word usage error (WUE) is the most common error type. In this paper, we build a neural network model that considers both target erroneous token and context to generate a correction vector and compare it against a candidate vocabulary to propose suitable corrections. To deal with potential alternative corrections, the top five proposed candidates are judged by native Chinese speakers. For more than 91% of the cases, our system can propose at least one acceptable correction within a list of five candidates. To the best of our knowledge, this is the first research addressing general-type Chinese WUE correction. Our system can help non-native Chinese learners revise their sentences by themselves. Title and Abstract in Chinese",
"title": ""
},
{
"docid": "8db3f92e38d379ab5ba644ff7a59544d",
"text": "Within American psychology, there has been a recent surge of interest in self-compassion, a construct from Buddhist thought. Self-compassion entails: (a) being kind and understanding toward oneself in times of pain or failure, (b) perceiving one’s own suffering as part of a larger human experience, and (c) holding painful feelings and thoughts in mindful awareness. In this article we review findings from personality, social, and clinical psychology related to self-compassion. First, we define self-compassion and distinguish it from other self-constructs such as self-esteem, self-pity, and self-criticism. Next, we review empirical work on the correlates of self-compassion, demonstrating that self-compassion has consistently been found to be related to well-being. These findings support the call for interventions that can raise self-compassion. We then review the theory and empirical support behind current interventions that could enhance self-compassion including compassionate mind training (CMT), imagery work, the gestalt two-chair technique, mindfulness based stress reduction (MBSR), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT). Directions for future research are also discussed.",
"title": ""
},
{
"docid": "b0ac318eea1dc5f6feb9fdaf5f554752",
"text": "In this paper an RSA calculation architecture is proposed for FPGAs that addresses the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of Public Key crypto systems. Using techniques based around Montgomery math for exponentiation, the proposed RSA calculation architecture is compared to existing FPGA-based solutions for speed, FPGA utilisation, and scalability. The paper will cover the RSA encryption algorithm, Montgomery math, basic FPGA technology, and the implementation details of the proposed RSA calculation architecture. Conclusions will be drawn, beyond the singular improvements over existing architectures, which highlight the advantages of a fully flexible & parameterisable design.",
"title": ""
},
{
"docid": "3dbedb4539ac6438e9befbad366d1220",
"text": "The main focus of this paper is to propose integration of dynamic and multiobjective algorithms for graph clustering in dynamic environments under multiple objectives. The primary application is to multiobjective clustering in social networks which change over time. Social networks, typically represented by graphs, contain information about the relations (or interactions) among online materials (or people). A typical social network tends to expand over time, with newly added nodes and edges being incorporated into the existing graph. We reflect these characteristics of social networks based on real-world data, and propose a suitable dynamic multiobjective evolutionary algorithm. Several variants of the algorithm are proposed and compared. Since social networks change continuously, the immigrant schemes effectively used in previous dynamic optimisation give useful ideas for new algorithms. An adaptive integration of multiobjective evolutionary algorithms outperformed other algorithms in dynamic social networks.",
"title": ""
},
{
"docid": "21bd78306fc5f899553246e08e4f3c0e",
"text": "In this paper, we present the system we have used for the Implicit WASSA 2018 Implicit Emotion Shared Task. The task is to predict the emotion of a tweet of which the explicit mentions of emotion terms have been removed. The idea is to come up with a model which has the ability to implicitly identify the emotion expressed given the context words. We have used a Gated Recurrent Neural Network (GRU) and a Capsule Network based model for the task. Pre-trained word embeddings have been utilized to incorporate contextual knowledge about words into the model. GRU layer learns latent representations using the input word embeddings. Subsequent Capsule Network layer learns high-level features from that hidden representation. The proposed model managed to achieve a macro-F1 score of 0.692.",
"title": ""
},
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "09cffaca68a254f591187776e911d36e",
"text": "Signaling across cellular membranes, the 826 human G protein-coupled receptors (GPCRs) govern a wide range of vital physiological processes, making GPCRs prominent drug targets. X-ray crystallography provided GPCR molecular architectures, which also revealed the need for additional structural dynamics data to support drug development. Here, nuclear magnetic resonance (NMR) spectroscopy with the wild-type-like A2A adenosine receptor (A2AAR) in solution provides a comprehensive characterization of signaling-related structural dynamics. All six tryptophan indole and eight glycine backbone 15N-1H NMR signals in A2AAR were individually assigned. These NMR probes provided insight into the role of Asp522.50 as an allosteric link between the orthosteric drug binding site and the intracellular signaling surface, revealing strong interactions with the toggle switch Trp 2466.48, and delineated the structural response to variable efficacy of bound drugs across A2AAR. The present data support GPCR signaling based on dynamic interactions between two semi-independent subdomains connected by an allosteric switch at Asp522.50.",
"title": ""
},
{
"docid": "45dfa7f6b1702942b5abfb8de920d1c2",
"text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 andp <; 0.01) and number of computer sessions (β = 0.78 and p <; 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures that have been related to this key health outcome.",
"title": ""
}
] | scidocsrr |
aac72557a72dafd34c89144aad275b36 | Allen Brain Atlas: an integrated spatio-temporal portal for exploring the central nervous system | [
{
"docid": "99fd1d5111b9d58f8d370be7de5b003d",
"text": "Molecular approaches to understanding the functional circuitry of the nervous system promise new insights into the relationship between genes, brain and behaviour. The cellular diversity of the brain necessitates a cellular resolution approach towards understanding the functional genomics of the nervous system. We describe here an anatomically comprehensive digital atlas containing the expression patterns of ∼20,000 genes in the adult mouse brain. Data were generated using automated high-throughput procedures for in situ hybridization and data acquisition, and are publicly accessible online. Newly developed image-based informatics tools allow global genome-scale structural analysis and cross-correlation, as well as identification of regionally enriched genes. Unbiased fine-resolution analysis has identified highly specific cellular markers as well as extensive evidence of cellular heterogeneity not evident in classical neuroanatomical atlases. This highly standardized atlas provides an open, primary data resource for a wide variety of further studies concerning brain organization and function.",
"title": ""
},
{
"docid": "1f364472fcf7da9bfc18d9bb8a521693",
"text": "The Cre/lox system is widely used in mice to achieve cell-type-specific gene expression. However, a strong and universally responding system to express genes under Cre control is still lacking. We have generated a set of Cre reporter mice with strong, ubiquitous expression of fluorescent proteins of different spectra. The robust native fluorescence of these reporters enables direct visualization of fine dendritic structures and axonal projections of the labeled neurons, which is useful in mapping neuronal circuitry, imaging and tracking specific cell populations in vivo. Using these reporters and a high-throughput in situ hybridization platform, we are systematically profiling Cre-directed gene expression throughout the mouse brain in several Cre-driver lines, including new Cre lines targeting different cell types in the cortex. Our expression data are displayed in a public online database to help researchers assess the utility of various Cre-driver lines for cell-type-specific genetic manipulation.",
"title": ""
}
] | [
{
"docid": "b156acf3a04c8edd6e58c859009374d6",
"text": "Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph substructures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.",
"title": ""
},
{
"docid": "0c77e3923dfae2b31824ce1285e6d5fd",
"text": "1 ACKNOWLEDGEMENTS 2",
"title": ""
},
{
"docid": "6329341da2a7e0957f2abde7f98764f9",
"text": "\"Enterprise Information Portals are applications that enable companies to unlock internally and externally stored information, and provide users a single gateway to personalized information needed to make informed business decisions. \" They are: \". . . an amalgamation of software applications that consolidate, manage, analyze and distribute information across and outside of an enterprise (including Business Intelligence, Content Management, Data Warehouse & Mart and Data Management applications.)\"",
"title": ""
},
{
"docid": "5169d59af7f5cae888a998f891d99b18",
"text": "Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed-in particular, the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze-data and further, to increase the explanatory power of the derived results.",
"title": ""
},
{
"docid": "4050f76539d79edff962963625298ae2",
"text": "An economic evaluation of a hybrid wind/photovoltaic/fuel cell generation system for a typical home in the Pacific Northwest is performed. In this configuration the combination of a fuel cell stack, an electrolyzer, and a hydrogen storage tank is used for the energy storage system. This system is compared to a traditional hybrid energy system with battery storage. A computer program has been developed to size system components in order to match the load of the site in the most cost effective way. A cost of electricity and an overall system cost are also calculated for each configuration. The study was performed using a graphical user interface programmed in MATLAB.",
"title": ""
},
{
"docid": "940994951108186b57c88217ffda9c88",
"text": "A small phallus causes great concern regarding genital adequacy. A concealed penis, although of normal size, appears small either because it is buried in prepubic tissues, enclosed in scrotal tissue penis palmatus (PP), or trapped due to phimosis or a scar following circumcision or trauma. From July 1978 to January 2001 we operated upon 92 boys with concealed penises; 49 had buried penises (BP), while PP of varying degrees was noted in 14. Of 29 patients with a trapped penis, phimosis was noted in 9, post-circumcision cicatrix (PCC) in 17, radical circumcision in 2, and posttraumatic scarring in 1. The BP was corrected at 2–3 years of age by incising the inner prepuce circumferentially, degloving the penis to the penopubic junction, dividing dysgenetic bands, and suturing the dermis of the penopubic skin to Buck's fascia with nonabsorbable sutures. Patients with PP required displacement of the scrotum in addition to correction of the BP. Phimosis was treated by circumcision. Patients with a PCC were recircumcised carefully, preserving normal skin, but Z-plasties and Byars flaps were often required for skin coverage. After radical circumcision and trauma, vascularized flaps were raised to cover the defect. Satisfactory results were obtained in all cases although 2 patients with BP required a second operation. The operation required to correct a concealed penis has to be tailored to its etiology.",
"title": ""
},
{
"docid": "1ac76924d3fae2bbcb7f7b84f1c2ea5e",
"text": "This chapter studies ontology matching : the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications). Despite its pervasiveness, today ontology matching is still largely conducted by hand, in a labor-intensive and error-prone process. The manual matching has now become a key bottleneck in building large-scale information management systems. The advent of technologies such as the WWW, XML, and the emerging Semantic Web will further fuel information sharing applications and exacerbate the problem. Hence, the development of tools to assist in the ontology matching process has become crucial for the success of a wide variety of information management applications. In response to the above challenge, we have developed GLUE, a system that employs learning techniques to semi-automatically create semantic mappings between ontologies. We shall begin the chapter by describing a motivating example: ontology matching on the Semantic Web. Then we present our GLUE solution. Finally, we describe a set of experiments on several real-world domains, and show that GLUE proposes highly accurate semantic mappings.",
"title": ""
},
{
"docid": "717e5a5b6026d42e7379d8e2c0c7ff45",
"text": "In this paper, a color image segmentation approach based on homogram thresholding and region merging is presented. The homogram considers both the occurrence of the gray levels and the neighboring homogeneity value among pixels. Therefore, it employs both the local and global information. Fuzzy entropy is utilized as a tool to perform homogram analysis for nding all major homogeneous regions at the rst stage. Then region merging process is carried out based on color similarity among these regions to avoid oversegmentation. The proposed homogram-based approach (HOB) is compared with the histogram-based approach (HIB). The experimental results demonstrate that the HOB can nd homogeneous regions more eeectively than HIB does, and can solve the problem of discriminating shading in color images to some extent.",
"title": ""
},
{
"docid": "3b9adb452f628a3cf5153b80f1977bc4",
"text": "Small signal stability analysis is conducted considering grid connected doubly-fed induction generator (DFIG) type. The modeling of a grid connected DFIG system is first set up and the whole model is formulated by a set of differential algebraic equations (DAE). Then, the mathematical model of rotor-side converter is built with decoupled P-Q control techniques to implement stator active and reactive powers control. Based on the abovementioned researches, the small signal stability analysis is carried out to explore and compared the differences between the whole system with the decoupled P-Q controller or not by eigenvalues and participation factors. Finally, numerical results demonstrate the system are stable, especially some conclusions and comments of interest are made. DFIG model; decoupled P-Q control; DAE; small signal analysis;",
"title": ""
},
{
"docid": "f651d8505f354fe0ad8e0866ca64e6e1",
"text": "Building on existing categorical accounts of natural language semantics, we propose a compositional distributional model of ambiguous meaning. Originally inspired by the high-level category theoretic language of quantum information protocols, the compositional, distributional categorical model provides a conceptually motivated procedure to compute the meaning of a sentence, given its grammatical structure and an empirical derivation of the meaning of its parts. Grammar is given a type-logical description in a compact closed category while the meaning of words is represented in a finite inner product space model. Since the category of finite-dimensional Hilbert spaces is also compact closed, the type-checking deduction process lifts to a concrete meaning-vector computation via a strong monoidal functor between the two categories. The advantage of reasoning with these structures is that grammatical composition admits an interpretation in terms of flow of meaning between words. Pushing the analogy with quantum mechanics further, we describe ambiguous words as statistical ensembles of unambiguous concepts and extend the semantics of the previous model to a category that supports probabilistic mixing. We introduce two different Frobenius algebras representing different ways of composing the meaning of words, and discuss their properties. We conclude with a range of applications to the case of definitions, including a meaning update rule that reconciles the meaning of an ambiguous word with that of its definition.",
"title": ""
},
{
"docid": "565dcf584448f6724a6529c3d2147a68",
"text": "People are fond of taking and sharing photos in their social life, and a large part of it is face images, especially selfies. A lot of researchers are interested in analyzing attractiveness of face images. Benefited from deep neural networks (DNNs) and training data, researchers have been developing deep learning models that can evaluate facial attractiveness of photos. However, recent development on DNNs showed that they could be easily fooled even when they are trained on a large dataset. In this paper, we used two approaches to generate adversarial examples that have high attractiveness scores but low subjective scores for face attractiveness evaluation on DNNs. In the first approach, experimental results using the SCUT-FBP dataset showed that we could increase attractiveness score of 20 test images from 2.67 to 4.99 on average (score range: [1, 5]) without noticeably changing the images. In the second approach, we could generate similar images from noise image with any target attractiveness score. Results show by using this approach, a part of attractiveness information could be manipulated artificially.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "88eeade777057cc154d28ef256fb3d87",
"text": "This paper focuses on the task of inserting punctuation symbols into transcribed conversational speech texts, without relying on prosodic cues. We investigate limitations associated with previous methods, and propose a novel approach based on dynamic conditional random fields. Different from previous work, our proposed approach is designed to jointly perform both sentence boundary and sentence type prediction, and punctuation prediction on speech utterances. We performed evaluations on a transcribed conversational speech domain consisting of both English and Chinese texts. Empirical results show that our method outperforms an approach based on linear-chain conditional random fields and other previous approaches.",
"title": ""
},
{
"docid": "7b220c4e424abd4c6a724c7d0b45c0f4",
"text": "Text in video is a very compact and accurate clue for video indexing and summarization. Most video text detection and extraction methods hold assumptions on text color, background contrast, and font style. Moreover, few methods can handle multilingual text well since different languages may have quite different appearances. This paper performs a detailed analysis of multilingual text characteristics, including English and Chinese. Based on the analysis, we propose a comprehensive, efficient video text detection, localization, and extraction method, which emphasizes the multilingual capability over the whole processing. The proposed method is also robust to various background complexities and text appearances. The text detection is carried out by edge detection, local thresholding, and hysteresis edge recovery. The coarse-to-fine localization scheme is then performed to identify text regions accurately. The text extraction consists of adaptive thresholding, dam point labeling, and inward filling. Experimental results on a large number of video images and comparisons with other methods are reported in detail.",
"title": ""
},
{
"docid": "ccac025250d397a5bcc6a5f847d2cc81",
"text": "With the widespread clinical use of comparative genomic hybridization chromosomal microarray technology, several previously unidentified clinically significant submicroscopic chromosome abnormalities have been discovered. Specifically, there have been reports of clinically significant microduplications found in regions of known microdeletion syndromes. In general, these microduplications have distinct features from those described in the corresponding microdeletion syndromes. We present a 5½-year-old patient with normal growth, borderline normal IQ, borderline hypertelorism, and speech and language delay who was found to have a submicroscopic 2.3 Mb terminal duplication involving the two proposed Wolf-Hirschhorn syndrome (WHS) critical regions at chromosome 4p16.3. This duplication was the result of a maternally inherited reciprocal translocation involving the breakpoints 4p16.3 and 17q25.3. Our patient's features are distinct from those described in WHS and are not as severe as those described in partial trisomy 4p. There are two other patients in the medical literature with 4p16.3 microduplications of similar size also involving the WHS critical regions. Our patient shows clinical overlap with these two patients, although overall her features are milder than what has been previously described. Our patient's features expand the knowledge of the clinical phenotype of a 4p16.3 microduplication and highlight the need for further information about it.",
"title": ""
},
{
"docid": "50e3052f48fccda7e404f13f60f14048",
"text": "BACKGROUND\nMany procedures have been described for surgical treatment of symptomatic hallux rigidus. Dorsal cheilectomy of the metatarsophalangeal joint combined with a dorsal-based closing wedge osteotomy of the proximal phalanx (i.e., Moberg procedure) has been described as an effective procedure. For patients with hallux rigidus and clinically significant hallux valgus interphalangeus, the authors previously described a dorsal cheilectomy combined with a biplanar closing wedge osteotomy of the proximal phalanx, combining a Moberg osteotomy with an Akin osteotomy. The purpose of this study was to describe the clinical results of this procedure.\n\n\nMETHODS\nThis article is a retrospective review of prospectively gathered data that reports the clinical and radiographic results of dorsal cheilectomy combined with a biplanar oblique closing wedge proximal phalanx osteotomy (i.e., Moberg-Akin procedure) for patients with symptomatic hallux rigidus and hallux valgus interphalangeus. Consecutive patients were followed and evaluated for clinical and radiographic healing, satisfaction, and ultimate need for additional procedure(s). Thirty-five feet in 34 patients underwent the procedure.\n\n\nRESULTS\nAll osteotomies healed. At an average of 22.5 months of follow-up, 90% of patients reported good or excellent results, with pain relief, improved function, and fewer shoe wear limitations following this procedure. Hallux valgus and hallux interphalangeal angles were radiographically improved. Other than one patient who requested hardware removal, no patients required additional surgical procedures.\n\n\nCONCLUSIONS\nDorsal cheilectomy combined with a Moberg-Akin procedure was an effective and durable procedure with minimal morbidity in patients with hallux rigidus combined with hallux valgus interphalangeus.",
"title": ""
},
{
"docid": "e5b543b8880ec436874bee6b03a58618",
"text": "This paper outlines my concerns with Qualitative Data Analysis’ (QDA) numerous remodelings of Grounded Theory (GT) and the subsequent eroding impact. I cite several examples of the erosion and summarize essential elements of classic GT methodology. It is hoped that the article will clarify my concerns with the continuing enthusiasm but misunderstood embrace of GT by QDA methodologists and serve as a preliminary guide to novice researchers who wish to explore the fundamental principles of GT.",
"title": ""
},
{
"docid": "566412870c83e5e44fabc50487b9d994",
"text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.",
"title": ""
},
{
"docid": "f5c6f4d125ebe557367bdb404c3094fb",
"text": "In this paper, we present a Chinese event extraction system. We point out a language specific issue in Chinese trigger labeling, and then commit to discussing the contributions of lexical, syntactic and semantic features applied in trigger labeling and argument labeling. As a result, we achieved competitive performance, specifically, F-measure of 59.9 in trigger labeling and F-measure of 43.8 in argument labeling.",
"title": ""
},
{
"docid": "e11f4e9188d5421b669fd7b3b78fcf18",
"text": "In this paper, we tackle the problem of segmenting out a sequence of actions from videos. The videos contain background and actions which are usually composed of ordered sub-actions. We refer the sub-actions and the background as semantic units. Considering the possible overlap between two adjacent semantic units, we propose a bidirectional sliding window method to generate the label distributions for various segments in the video. The label distribution covers a certain number of semantic unit labels, representing the degree to which each label describes the video segment. The mapping from a video segment to its label distribution is then learned by a Label Distribution Learning (LDL) algorithm. Based on the LDL model, a soft video parsing method with segmental regular grammars is proposed to construct a tree structure for the video. Each leaf of the tree stands for a video clip of background or sub-action. The proposed method shows promising results on the THUMOS’14, MSR-II and UCF101 datasets and its computational complexity is much less than the compared state-of-the-art video parsing method.",
"title": ""
}
] | scidocsrr |
ed6afeb80b8b3da85c6d8fa09b6871a3 | Using Pivots to Speed-Up k-Medoids Clustering | [
{
"docid": "1c5f53fe8d663047a3a8240742ba47e4",
"text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.",
"title": ""
}
] | [
{
"docid": "674339928a16b372fb13395f920561e5",
"text": "High-speed, high-efficiency photodetectors play an important role in optical communication links that are increasingly being used in data centres to handle higher volumes of data traffic and higher bandwidths, as big data and cloud computing continue to grow exponentially. Monolithic integration of optical components with signal-processing electronics on a single silicon chip is of paramount importance in the drive to reduce cost and improve performance. We report the first demonstration of microand nanoscale holes enabling light trapping in a silicon photodiode, which exhibits an ultrafast impulse response (full-width at half-maximum) of 30 ps and a high efficiency of more than 50%, for use in data-centre optical communications. The photodiode uses microand nanostructured holes to enhance, by an order of magnitude, the absorption efficiency of a thin intrinsic layer of less than 2 μm thickness and is designed for a data rate of 20 gigabits per second or higher at a wavelength of 850 nm. Further optimization can improve the efficiency to more than 70%.",
"title": ""
},
{
"docid": "590a44ab149b88e536e67622515fdd08",
"text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).",
"title": ""
},
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
{
"docid": "b56cd1e9392976f48dddf7d3a60c5aef",
"text": "This paper presents a novel single-switch converter with high voltage gain and low voltage stress for photovoltaic applications. The proposed converter is composed of coupled-inductor and switched-capacitor techniques to achieve high step-up conversion ratio without adopting extremely high duty ratio or high turns ratio. The capacitors are charged in parallel and discharged in series by the coupled inductor to achieve high step-up voltage gain with an appropriate duty ratio. Besides, the voltage stress on the main switch is reduced with a passive clamp circuit, and the conduction losses are reduced. In addition, the reverse-recovery problem of the diode is alleviated by a coupled inductor. Thus, the efficiency can be further improved. The operating principle, steady state analysis and design of the proposed single switch converter with high step-up gain is carried out. A 24 V input voltage, 400 V output, and 300W maximum output power integrated converter is designed and analysed using MATLAB simulink. Simulation result proves the performance and functionality of the proposed single switch DC-DC converter for validation.",
"title": ""
},
{
"docid": "7db555e42bff7728edb8fb199f063cba",
"text": "The need for more post-secondary students to major and graduate in STEM fields is widely recognized. Students' motivation and strategic self-regulation have been identified as playing crucial roles in their success in STEM classes. But, how students' strategy use, self-regulation, knowledge building, and engagement impact different learning outcomes is not well understood. Our goal in this study was to investigate how motivation, strategic self-regulation, and creative competency were associated with course achievement and long-term learning of computational thinking knowledge and skills in introductory computer science courses. Student grades and long-term retention were positively associated with self-regulated strategy use and knowledge building, and negatively associated with lack of regulation. Grades were associated with higher study effort and knowledge retention was associated with higher study time. For motivation, higher learning- and task-approach goal orientations, endogenous instrumentality, and positive affect and lower learning-, task-, and performance-avoid goal orientations, exogenous instrumentality and negative affect were associated with higher grades and knowledge retention and also with strategic self-regulation and engagement. Implicit intelligence beliefs were associated with strategic self-regulation, but not grades or knowledge retention. Creative competency was associated with knowledge retention, but not grades, and with higher strategic self-regulation. Implications for STEM education are discussed.",
"title": ""
},
{
"docid": "2a7bd6fbce4fef6e319664090755858d",
"text": "AIM\nThis paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability.\n\n\nBACKGROUND\nNurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability.\n\n\nMETHOD\nA cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire.\n\n\nFINDINGS\nWe identified six major groups of occupational stressors: 'Organization of work and financial issues', 'public criticism', 'hazards at workplace', 'interpersonal conflicts at workplace', 'shift work' and 'professional and intellectual demands'. Nurses with secondary school qualifications perceived Hazards at workplace and Shift work as statistically significantly more stressful than nurses a with college degree. Predictors statistically significantly related with low work ability were: Organization of work and financial issues (odds ratio = 1.69, 95% confidence interval 122-236), lower educational level (odds ratio = 1.69, 95% confidence interval 122-236) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09).\n\n\nCONCLUSION\nHospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.",
"title": ""
},
{
"docid": "e31901738e78728a7376457f7d1acd26",
"text": "Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.",
"title": ""
},
{
"docid": "0a5ae1eb45404d6a42678e955c23116c",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "fcfe75abfde3edbf051ccb78387c3904",
"text": "In this paper a Fuzzy Logic Controller (FLC) for path following of a four-wheel differentially skid steer mobile robot is presented. Fuzzy velocity and fuzzy torque control of the mobile robot is compared with classical controllers. To assess controllers robot kinematics and dynamics are simulated with parameters of P2-AT mobile robot. Results demonstrate the better performance of fuzzy logic controllers in following a predefined path.",
"title": ""
},
{
"docid": "54001ce62d0b571be9fbaf0980aa1b70",
"text": "Due to the large increase of malware samples in the last 10 years, the demand of the antimalware industry for an automated classifier has increased. However, this classifier has to satisfy two restrictions in order to be used in real life situations: high detection rate and very low number of false positives. By modifying the perceptron algorithm and combining existing features, we were able to provide a good solution to the problem, called the one side perceptron. Since the power of the perceptron lies in its features, we will focus our study on improving the feature creation algorithm. This paper presents different methods, including simple mathematical operations and the usage of a restricted Boltzmann machine, for creating features designed for an increased detection rate of the one side perceptron. The analysis is carried out using a large dataset of approximately 3 million files.",
"title": ""
},
{
"docid": "d32887dfac583ed851f607807c2f624e",
"text": "For a through-wall ultrawideband (UWB) random noise radar using array antennas, subtraction of successive frames of the cross-correlation signals between each received element signal and the transmitted signal is able to isolate moving targets in heavy clutter. Images of moving targets are subsequently obtained using the back projection (BP) algorithm. This technique is not constrained to noise radar, but can also be applied to other kinds of radar systems. Different models based on the finite-difference time-domain (FDTD) algorithm are set up to simulate different through-wall scenarios of moving targets. Simulation results show that the heavy clutter is suppressed, and the signal-to-clutter ratio (SCR) is greatly enhanced using this approach. Multiple moving targets can be detected, localized, and tracked for any random movement.",
"title": ""
},
{
"docid": "44402fdc3c9f2c6efaf77a00035f38ad",
"text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criterion, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used for light rail transit applications showing similar design results as in industry.",
"title": ""
},
{
"docid": "9f9268761bd2335303cfe2797d7e9eaa",
"text": "CYBER attacks have risen in recent times. The attack on Sony Pictures by hackers, allegedly from North Korea, has caught worldwide attention. The President of the United States of America issued a statement and “vowed a US response after North Korea’s alleged cyber-attack”.This dangerous malware termed “wiper” could overwrite data and stop important execution processes. An analysis by the FBI showed distinct similarities between this attack and the code used to attack South Korea in 2013, thus confirming that hackers re-use code from already existing malware to create new variants. This attack along with other recently discovered attacks such as Regin, Opcleaver give one clear message: current cyber security defense mechanisms are not sufficient enough to thwart these sophisticated attacks. Today’s defense mechanisms are based on scanning systems for suspicious or malicious activity. If such an activity is found, the files under suspect are either quarantined or the vulnerable system is patched with an update. These scanning methods are based on a variety of techniques such as static analysis, dynamic analysis and other heuristics based techniques, which are often slow to react to new attacks and threats. Static analysis is based on analyzing an executable without executing it, while dynamic analysis executes the binary and studies its behavioral characteristics. Hackers are familiar with these standard methods and come up with ways to evade the current defense mechanisms. They produce new malware variants that easily evade the detection methods. These variants are created from existing malware using inexpensive easily available “factory toolkits” in a “virtual factory” like setting, which then spread over and infect more systems. Once a system is compromised, it either quickly looses control and/or the infection spreads to other networked systems. While security techniques constantly evolve to keep up with new attacks, hackers too change their ways and continue to evade defense mechanisms. As this never-ending billion dollar “cat and mouse game” continues, it may be useful to look at avenues that can bring in novel alternative and/or orthogonal defense approaches to counter the ongoing threats. The hope is to catch these new attacks using orthogonal and complementary methods which may not be well known to hackers, thus making it more difficult and/or expensive for them to evade all detection schemes. This paper focuses on such orthogonal approaches from Signal and Image Processing that complement standard approaches.",
"title": ""
},
{
"docid": "7f5af3806f0baa040a26f258944ad3f9",
"text": "Linear Discriminant Analysis (LDA) is a widely-used supervised dimensionality reduction method in computer vision and pattern recognition. In null space based LDA (NLDA), a well-known LDA extension, between-class distance is maximized in the null space of the within-class scatter matrix. However, there are some limitations in NLDA. Firstly, for many data sets, null space of within-class scatter matrix does not exist, thus NLDA is not applicable to those datasets. Secondly, NLDA uses arithmetic mean of between-class distances and gives equal consideration to all between-class distances, which makes larger between-class distances can dominate the result and thus limits the performance of NLDA. In this paper, we propose a harmonic mean based Linear Discriminant Analysis, Multi-Class Discriminant Analysis (MCDA), for image classification, which minimizes the reciprocal of weighted harmonic mean of pairwise between-class distance. More importantly, MCDA gives higher priority to maximize small between-class distances. MCDA can be extended to multi-label dimension reduction. Results on 7 single-label data sets and 4 multi-label data sets show that MCDA has consistently better performance than 10 other single-label approaches and 4 other multi-label approaches in terms of classification accuracy, macro and micro average F1 score.",
"title": ""
},
{
"docid": "97691304930a85066a15086877473857",
"text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.",
"title": ""
},
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "0ccf20f28baf8a11c78d593efb9f6a52",
"text": "From a traction application point of view, proper operation of the synchronous reluctance motor over a wide speed range and mechanical robustness is desired. This paper presents new methods to improve the rotor mechanical integrity and the flux weakening capability at high speed using geometrical and variable ampere-turns concepts. The results from computer-aided analysis and experiment are compared to evaluate the methods. It is shown that, to achieve a proper design at high speed, the magnetic and mechanical performances need to be simultaneously analyzed due to their mutual effect.",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "a41d40d8349c1071c6f532b6b8e11be3",
"text": "A novel wideband slotline antenna is proposed using the multimode resonance concept. By symmetrically introducing two slot stubs along the slotline radiator near the nulls of electric-field distribution of the second odd-order mode, two radiation modes are excited in a single slotline resonator. With the help of the two stubs, the second odd-order mode gradually merges with its first counterpart and results into a wideband radiation with two resonances. Prototype antennas are then fabricated to experimentally validate the principle and design approach of the proposed slotline antenna. It is shown that the proposed slotline antenna's impedance bandwidth could be effectively increased to 32.7% while keeping an inherent narrow slot structure.",
"title": ""
},
{
"docid": "ebe91d4e3559439af5dd729e7321883d",
"text": "Performance of data analytics in Internet of Things (IoTs) depends on effective transport services offered by the underlying network. Fog computing enables independent data-plane computational features at the edge-switches, which serves as a platform for performing certain critical analytics required at the IoT source. To this end, in this paper, we implement a working prototype of Fog computing node based on Software-Defined Networking (SDN). Message Queuing Telemetry Transport (MQTT) is chosen as the candidate IoT protocol that transports data generated from IoT devices (a:k:a: MQTT publishers) to a remote host (called MQTT broker). We implement the MQTT broker functionalities integrated at the edge-switches, that serves as a platform to perform simple message-based analytics at the switches, and also deliver messages in a reliable manner to the end-host for post-delivery analytics. We mathematically validate the improved delivery performance as offered by the proposed switch-embedded brokers.",
"title": ""
}
] | scidocsrr |
0a81286afb381a9f6e2825a03f13265d | Prediction of long-term clinical outcomes using simple functional exercise performance tests in patients with COPD: a 5-year prospective cohort study | [
{
"docid": "0dc0815505f065472b3929792de638b4",
"text": "Our aim was to comprehensively validate the 1-min sit-to-stand (STS) test in chronic obstructive pulmonary disease (COPD) patients and explore the physiological response to the test.We used data from two longitudinal studies of COPD patients who completed inpatient pulmonary rehabilitation programmes. We collected 1-min STS test, 6-min walk test (6MWT), health-related quality of life, dyspnoea and exercise cardiorespiratory data at admission and discharge. We assessed the learning effect, test-retest reliability, construct validity, responsiveness and minimal important difference of the 1-min STS test.In both studies (n=52 and n=203) the 1-min STS test was strongly correlated with the 6MWT at admission (r=0.59 and 0.64, respectively) and discharge (r=0.67 and 0.68, respectively). Intraclass correlation coefficients (95% CI) between 1-min STS tests were 0.93 (0.83-0.97) for learning effect and 0.99 (0.97-1.00) for reliability. Standardised response means (95% CI) were 0.87 (0.58-1.16) and 0.91 (0.78-1.07). The estimated minimal important difference was three repetitions. End-exercise oxygen consumption, carbon dioxide output, ventilation, breathing frequency and heart rate were similar in the 1-min STS test and 6MWT.The 1-min STS test is a reliable, valid and responsive test for measuring functional exercise capacity in COPD patients and elicited a physiological response comparable to that of the 6MWT.",
"title": ""
}
] | [
{
"docid": "b25379a7a48ef2b6bcc2df8d84d7680b",
"text": "Microblogging (Twitter or Facebook) has become a very popular communication tool among Internet users in recent years. Information is generated and managed through either computer or mobile devices by one person and is consumed by many other persons, with most of this user-generated content being textual information. As there are a lot of raw data of people posting real time messages about their opinions on a variety of topics in daily life, it is a worthwhile research endeavor to collect and analyze these data, which may be useful for users or managers to make informed decisions, for example. However this problem is challenging because a micro-blog post is usually very short and colloquial, and traditional opinion mining algorithms do not work well in such type of text. Therefore, in this paper, we propose a new system architecture that can automatically analyze the sentiments of these messages. We combine this system with manually annotated data from Twitter, one of the most popular microblogging platforms, for the task of sentiment analysis. In this system, machines can learn how to automatically extract the set of messages which contain opinions, filter out nonopinion messages and determine their sentiment directions (i.e. positive, negative). Experimental results verify the effectiveness of our system on sentiment analysis in real microblogging applications.",
"title": ""
},
{
"docid": "2bba03660a752f7033e8ecd95eb6bdbd",
"text": "Crowdsensing has the potential to support human-driven sensing and data collection at an unprecedented scale. While many organizers of data collection campaigns may have extensive domain knowledge, they do not necessarily have the skills required to develop robust software for crowdsensing. In this paper, we present Mobile Campaign Designer, a tool that simplifies the creation of mobile crowdsensing applications. Using Mobile Campaign Designer, an organizer is able to define parameters about their crowdsensing campaign, and the tool generates the source code and an executable for a tailored mobile application that embodies the current best practices in crowdsensing. An evaluation of the tool shows that users at all levels of technical expertise are capable of creating a crowdsensing application in an average of five minutes, and the generated applications are comparable in quality to existing crowdsensing applications.",
"title": ""
},
{
"docid": "125259c4471d4250214fec50b5e97522",
"text": "The switched reluctance motor (SRM) is a promising drive solution for electric vehicle propulsion thanks to its simple, rugged structure, satisfying performance and low price. Among other SRMs, the axial flux SRM (AFSRM) is a strong candidate for in-wheel drive applications because of its high torque/power density and compact disc shape. In this paper, a four-phase 8-stator-pole 6-rotor-pole double-rotor AFSRM is investigated for an e-bike application. A series of analyses are conducted to reduce the torque ripple by shaping the rotor poles, and a multi-level air gap geometry is designed with specific air gap dimensions at different positions. Both static and dynamic analyses show significant torque ripple reduction while maintaining the average electromagnetic output torque at the demanded level.",
"title": ""
},
{
"docid": "78f4ac2d266d64646a7d9bc735257f9d",
"text": "To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily “plugged in” to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-ofthe-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints.",
"title": ""
},
{
"docid": "f53f739dd526e3f954aabded123f0710",
"text": "Successful Free/Libre Open Source Software (FLOSS) projects must attract and retain high-quality talent. Researchers have invested considerable effort in the study of core and peripheral FLOSS developers. To this point, one critical subset of developers that have not been studied are One-Time code Contributors (OTC) – those that have had exactly one patch accepted. To understand why OTCs have not contributed another patch and provide guidance to FLOSS projects on retaining OTCs, this study seeks to understand the impressions, motivations, and barriers experienced by OTCs. We conducted an online survey of OTCs from 23 popular FLOSS projects. Based on the 184 responses received, we observed that OTCs generally have positive impressions of their FLOSS project and are driven by a variety of motivations. Most OTCs primarily made contributions to fix bugs that impeded their work and did not plan on becoming long term contributors. Furthermore, OTCs encounter a number of barriers that prevent them from continuing to contribute to the project. Based on our findings, there are some concrete actions FLOSS projects can take to increase the chances of converting OTCs into long-term contributors.",
"title": ""
},
{
"docid": "21916d34fb470601fb6376c4bcd0839a",
"text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.",
"title": ""
},
{
"docid": "c157b149d334b2cc1f718d70ef85e75e",
"text": "The large inter-individual variability within the normal population, the limited reproducibility due to habituation or fatigue, and the impact of instruction and the subject's motivation, all constitute a major problem in posturography. These aspects hinder reliable evaluation of the changes in balance control in the case of disease and complicate objectivation of the impact of therapy and sensory input on balance control. In this study, we examine whether measurement of balance control near individualized limits of stability and under very challenging sensory conditions might reduce inter- and intra-individual variability compared to the well-known Sensory Organization Test (SOT). To do so, subjects balance on a platform on which instability increases automatically until body orientation or body sway velocity surpasses a safety limit. The maximum tolerated platform instability is then used as a measure for balance control under 10 different sensory conditions. Ninety-seven healthy subjects and 107 patients suffering from chronic dizziness (whiplash syndrome (n = 25), Meniere's disease (n = 28), acute (n = 28) or gradual (n = 26) peripheral function loss) were tested. In both healthy subjects and patients this approach resulted in a low intra-individual variability (< 14.5(%). In healthy subjects and patients, balance control was maximally affected by closure of the eyes and by vibration of the Achilles' tendons. The other perturbation techniques applied (sway referenced vision or platform, cooling of the foot soles) were less effective. Combining perturbation techniques reduced balance control even more, but the effect was less than the linear summation of the effect induced by the techniques applied separately. The group averages of healthy subjects show that vision contributed maximum 37%, propriocepsis minimum 26%, and labyrinths maximum 44% to balance control in healthy subjects. However, a large inter-individual variability was observed. Balance control of each patient group was less than in healthy subjects in all sensory conditions. Similar to healthy subjects, patients also show a large inter-individual variability, which results in a low sensitivity of the test. With the exception of some minor differences between Whiplash and Meniere patients, balance control did not differ between the four patient groups. This points to a low specificity of the test. Balance control was not correlated with the outcome of the standard vestibular examination. This study strengthens our notion that the contribution of the sensory inputs to balance control differs considerably per individual and may simply be due to differences in the vestibular function related to the specific pathology, but also to differences in motor learning strategies in relation to daily life requirements. It is difficult to provide clinically relevant normative data. We conclude that, like the SOT, the current test is merely a functional test of balance with limited diagnostic value.",
"title": ""
},
{
"docid": "f562bd72463945bd35d42894e4815543",
"text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eb6ee2fd1f7f1d0d767e4dde2d811bed",
"text": "This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.",
"title": ""
},
{
"docid": "eb3f72e91f13a3c6faee53c6d4cd4174",
"text": "Recent studies indicate that nearly 75% of queries issued to Web search engines aim at finding information about entities, which are material objects or concepts that exist in the real world or fiction (e.g. people, organizations, products, etc.). Most common information needs underlying this type of queries include finding a certain entity (e.g. “Einstein relativity theory”), a particular attribute or property of an entity (e.g. “Who founded Intel?”) or a list of entities satisfying a certain criteria (e.g. “Formula 1 drivers that won the Monaco Grand Prix”). These information needs can be efficiently addressed by presenting structured information about a target entity or a list of entities retrieved from a knowledge graph either directly as search results or in addition to the ranked list of documents. This tutorial provides a summary of the recent research in knowledge graph entity representation methods and retrieval models. The first part of this tutorial introduces state-of-the-art methods for entity representation, from multi-fielded documents with flat and hierarchical structure to latent dimensional representations based on tensor factorization, while the second part presents recent developments in entity retrieval models, including Fielded Sequential Dependence Model (FSDM) and its parametric extension (PFSDM), as well as entity set expansion and ranking methods.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "2d0d42a6c712d93ace0bf37ffe786a75",
"text": "Personalized search systems tailor search results to the current user intent using historic search interactions. This relies on being able to find pertinent information in that user's search history, which can be challenging for unseen queries and for new search scenarios. Building richer models of users' current and historic search tasks can help improve the likelihood of finding relevant content and enhance the relevance and coverage of personalization methods. The task-based approach can be applied to the current user's search history, or as we focus on here, all users' search histories as so-called \"groupization\" (a variant of personalization whereby other users' profiles can be used to personalize the search experience). We describe a method whereby we mine historic search-engine logs to find other users performing similar tasks to the current user and leverage their on-task behavior to identify Web pages to promote in the current ranking. We investigate the effectiveness of this approach versus query-based matching and finding related historic activity from the current user (i.e., group versus individual). As part of our studies we also explore the use of the on-task behavior of particular user cohorts, such as people who are expert in the topic currently being searched, rather than all other users. Our approach yields promising gains in retrieval performance, and has direct implications for improving personalization in search systems.",
"title": ""
},
{
"docid": "190bf6cd8a2e9a5764b42d01b7aec7c8",
"text": "We propose a method for compiling a class of Σ-protocols (3-move public-coin protocols) into non-interactive zero-knowledge arguments. The method is based on homomorphic encryption and does not use random oracles. It only requires that a private/public key pair is set up for the verifier. The method applies to all known discrete-log based Σ-protocols. As applications, we obtain non-interactive threshold RSA without random oracles, and non-interactive zero-knowledge for NP more efficiently than by previous methods.",
"title": ""
},
{
"docid": "2a0577aa61ca1cbde207306fdb5beb08",
"text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.",
"title": ""
},
{
"docid": "f794b6914cc99fcd2a13b81e6fbe12d2",
"text": "An unprecedented rise in the number of asylum seekers and refugees was seen in Europe in 2015, and it seems that numbers are not going to be reduced considerably in 2016. Several studies have tried to estimate risk of infectious diseases associated with migration but only very rarely these studies make a distinction on reason for migration. In these studies, workers, students, and refugees who have moved to a foreign country are all taken to have the same disease epidemiology. A common disease epidemiology across very different migrant groups is unlikely, so in this review of infectious diseases in asylum seekers and refugees, we describe infectious disease prevalence in various types of migrants. We identified 51 studies eligible for inclusion. The highest infectious disease prevalence in refugee and asylum seeker populations have been reported for latent tuberculosis (9-45%), active tuberculosis (up to 11%), and hepatitis B (up to 12%). The same population had low prevalence of malaria (7%) and hepatitis C (up to 5%). There have been recent case reports from European countries of cutaneous diphtheria, louse-born relapsing fever, and shigella in the asylum-seeking and refugee population. The increased risk that refugees and asylum seekers have for infection with specific diseases can largely be attributed to poor living conditions during and after migration. Even though we see high transmission in the refugee populations, there is very little risk of spread to the autochthonous population. These findings support the efforts towards creating a common European standard for the health reception and reporting of asylum seekers and refugees.",
"title": ""
},
{
"docid": "be3bf1e95312cc0ce115e3aaac2ecc96",
"text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. University of Alberta, Dep. of Computing Science, Edmonton, Canada University of Alberta, Deps. of Medicine and Computing Science, Edmonton, Alberta, Canada. Correspondence to: Kory Mathewson <korym@ualberta.ca>. Under review for the 34 th International Conference on Machine Learning, Sydney, Australia, 2017. JMLR: W&CP. Copyright 2017 by the authors. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.",
"title": ""
},
{
"docid": "e4fb31ebacb093932517719884264b46",
"text": "Monitoring and control the environmental parameters in agricultural constructions are essential to improve energy efficiency and productivity. Real-time monitoring allows the detection and early correction of unfavourable situations, optimizing consumption and protecting crops against diseases. This work describes an automatic system for monitoring farm environments with the aim of increasing efficiency and quality of the agricultural environment. Based on the Internet of Things, the system uses a low-cost wireless sensor network, called Sun Spot, programmed in Java, with the Java VM running on the device itself and the Arduino platform for Internet connection. The data collected is shared through the social network of Facebook. The temperature and brightness parameters are monitored in real time. Other sensors can be added to monitor the issue for specific purposes. The results show that conditions within greenhouses may in some cases be very different from those expected. Therefore, the proposed system can provide an effective tool to improve the quality of agricultural production and energy efficiency.",
"title": ""
},
{
"docid": "370ec5c556b70ead92bc45d1f419acaf",
"text": "Despite the identification of circulating tumor cells (CTCs) and cell-free DNA (cfDNA) as potential blood-based biomarkers capable of providing prognostic and predictive information in cancer, they have not been incorporated into routine clinical practice. This resistance is due in part to technological limitations hampering CTC and cfDNA analysis, as well as a limited understanding of precisely how to interpret emergent biomarkers across various disease stages and tumor types. In recognition of these challenges, a group of researchers and clinicians focused on blood-based biomarker development met at the Canadian Cancer Trials Group (CCTG) Spring Meeting in Toronto, Canada on 29 April 2016 for a workshop discussing novel CTC/cfDNA technologies, interpretation of data obtained from CTCs versus cfDNA, challenges regarding disease evolution and heterogeneity, and logistical considerations for incorporation of CTCs/cfDNA into clinical trials, and ultimately into routine clinical use. The objectives of this workshop included discussion of the current barriers to clinical implementation and recent progress made in the field, as well as fueling meaningful collaborations and partnerships between researchers and clinicians. We anticipate that the considerations highlighted at this workshop will lead to advances in both basic and translational research and will ultimately impact patient management strategies and patient outcomes.",
"title": ""
},
{
"docid": "86fca69ae48592e06109f7b05180db28",
"text": "Background: The software development industry has been adopting agile methods instead of traditional software development methods because they are more flexible and can bring benefits such as handling requirements changes, productivity gains and business alignment. Objective: This study seeks to evaluate, synthesize, and present aspects of research on agile methods tailoring including the method tailoring approaches adopted and the criteria used for agile practice selection. Method: The method adopted was a Systematic Literature Review (SLR) on studies published from 2002 to 2014. Results: 56 out of 783 papers have been identified as describing agile method tailoring approaches. These studies have been identified as case studies regarding the empirical research, as solution proposals regarding the research type, and as evaluation studies regarding the research validation type. Most of the papers used method engineering to implement tailoring and were not specific to any agile method on their scope. Conclusion: Most of agile methods tailoring research papers proposed or improved a technique, were implemented as case studies analyzing one case in details and validated their findings using evaluation. Method engineering was the base for tailoring, the approaches are independent of agile method and the main criteria used are internal environment and objectives variables. © 2015 Elsevier Inc. All rights reserved.",
"title": ""
}
] | scidocsrr |
fc572685aa55c813ea4803ee813b4801 | Proposal : Scalable , Active and Flexible Learning on Distributions | [
{
"docid": "9e3057c25630bfdf5e7ebcc53b6995b0",
"text": "We present a new solution to the ``ecological inference'' problem, of learning individual-level associations from aggregate data. This problem has a long history and has attracted much attention, debate, claims that it is unsolvable, and purported solutions. Unlike other ecological inference techniques, our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in Hilbert space. Our approach relies on recent learning theory results for distribution regression, using kernel embeddings of distributions. Our novel approach to distribution regression exploits the connection between Gaussian process regression and kernel ridge regression, giving us a coherent, Bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function. Our approach is highly scalable as it relies on FastFood, a randomized explicit feature representation for kernel embeddings. We apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data. We consider the 2012 US Presidential election, and ask: what was the probability that members of various demographic groups supported Barack Obama, and how did this vary spatially across the country? Our results match standard survey-based exit polling data for the small number of states for which it is available, and serve to fill in the large gaps in this data, at a much higher degree of granularity.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "2bdaaeb18db927e2140c53fcc8d4fa30",
"text": "Many information gathering problems require determining the set of points, for which an unknown function takes value above or below some given threshold level. As a concrete example, in the context of environmental monitoring of Lake Zurich we would like to estimate the regions of the lake where the concentration of chlorophyll or algae is greater than some critical value, which would serve as an indicator of algal bloom phenomena. A critical factor in such applications is the high cost in terms of time, baery power, etc. that is associated with each measurement, therefore it is important to be careful about selecting “informative” locations to sample, in order to reduce the total sampling effort required. We formalize the task of level set estimation as a classification problem with sequential measurements, where the unknown function is modeled as a sample from a Gaussian process (GP). We propose LSE, an active learning algorithm that guides both sampling and classification based on GP-derived confidence bounds, and provide theoretical guarantees about its sample complexity. Furthermore, we extend LSE and its theory to two more natural seings: (1) where the threshold level is implicitly defined as a percentage of the (unknown) maximum of the target function and (2) where samples are selected in batches. Based on the laer extension we also propose a simple path planning algorithm. We evaluate the effectiveness of our proposed methods on two problems of practical interest, namely the aforementioned autonomous monitoring of algal populations in Lake Zurich and geolocating network latency.",
"title": ""
}
] | [
{
"docid": "5b0e33ede34f6532a48782e423128f49",
"text": "The literature on globalisation reveals wide agreement concerning the relevance of international sourcing strategies as key competitive factors for companies seeking globalisation, considering such strategies to be a purchasing management approach focusing on supplies from vendors in the world market, rather than relying exclusively on domestic offerings (Petersen, Frayer, & Scannel, 2000; Stevens, 1995; Trent & Monczka, 1998). Thus, the notion of “international sourcing” mentioned by these authors describes the level of supply globalisation in companies’ purchasing strategy, as related to supplier source (Giunipero & Pearcy, 2000; Levy, 1995; Trent & Monczka, 2003b).",
"title": ""
},
{
"docid": "0a3d4b02d2273087c50b8b0d77fb8c36",
"text": "Circulation. 2017;135:e867–e884. DOI: 10.1161/CIR.0000000000000482 April 11, 2017 e867 ABSTRACT: Multiple randomized controlled trials (RCTs) have assessed the effects of supplementation with eicosapentaenoic acid plus docosahexaenoic acid (omega-3 polyunsaturated fatty acids, commonly called fish oils) on the occurrence of clinical cardiovascular diseases. Although the effects of supplementation for the primary prevention of clinical cardiovascular events in the general population have not been examined, RCTs have assessed the role of supplementation in secondary prevention among patients with diabetes mellitus and prediabetes, patients at high risk of cardiovascular disease, and those with prevalent coronary heart disease. In this scientific advisory, we take a clinical approach and focus on common indications for omega-3 polyunsaturated fatty acid supplements related to the prevention of clinical cardiovascular events. We limited the scope of our review to large RCTs of supplementation with major clinical cardiovascular disease end points; meta-analyses were considered secondarily. We discuss the features of available RCTs and provide the rationale for our recommendations. We then use existing American Heart Association criteria to assess the strength of the recommendation and the level of evidence. On the basis of our review of the cumulative evidence from RCTs designed to assess the effect of omega-3 polyunsaturated fatty acid supplementation on clinical cardiovascular events, we update prior recommendations for patients with prevalent coronary heart disease, and we offer recommendations, when data are available, for patients with other clinical indications, including patients with diabetes mellitus and prediabetes and those with high risk of cardiovascular disease, stroke, heart failure, and atrial fibrillation. David S. Siscovick, MD, MPH, FAHA, Chair Thomas A. Barringer, MD, FAHA Amanda M. Fretts, PhD, MPH Jason H.Y. Wu, PhD, MSc, FAHA Alice H. Lichtenstein, DSc, FAHA Rebecca B. Costello, PhD, FAHA Penny M. Kris-Etherton, PhD, RD, FAHA Terry A. Jacobson, MD, FAHA Mary B. Engler, PhD, RN, MS, FAHA Heather M. Alger, PhD Lawrence J. Appel, MD, MPH, FAHA Dariush Mozaffarian, MD, DrPH, FAHA On behalf of the American Heart Association Nutrition Committee of the Council on Lifestyle and Cardiometabolic Health; Council on Epidemiology and Prevention; Council on Cardiovascular Disease in the Young; Council on Cardiovascular and Stroke Nursing; and Council on Clinical Cardiology Omega-3 Polyunsaturated Fatty Acid (Fish Oil) Supplementation and the Prevention of Clinical Cardiovascular Disease",
"title": ""
},
{
"docid": "da61794b9ffa1f6f4bc39cef9655bf77",
"text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.",
"title": ""
},
{
"docid": "fe4954b2b96a0ab95f5eedfca9b12066",
"text": "Marketing historically has undergone various shifts in emphasis from production through sales to marketing orientation. However, the various orientations have failed to engage customers in meaningful relationship mutually beneficial to organisations and customers, with all forms of the shift still exhibiting the transactional approach inherit in traditional marketing (Kubil & Doku, 2010). However, Coltman (2006) indicates that in strategy and marketing literature, scholars have long suggested that a customer centred strategy is fundamental to competitive advantage and that customer relationship management (CRM) programmes are increasingly being used by organisations to support the type of customer understanding and interdepartmental connectedness required to effectively execute a customer strategy.",
"title": ""
},
{
"docid": "f3fdc63904e2bf79df8b6ca30a864fd3",
"text": "Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed and verified that such a prosthesis can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle, but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises an unidirectional spring, configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle, and is shown to be satisfying the restrictive design specifications dictated by normal human ankle walking biomechanics.",
"title": ""
},
{
"docid": "e8fee9f93106ce292c89c26be373030f",
"text": "As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.",
"title": ""
},
{
"docid": "181356b104a26d1d300d10619fb78f45",
"text": "Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising potential results in solving complex control problems with high dimensional state and action spaces. Inspired by these successes, in this paper, we build two kinds of reinforcement learning algorithms: deep policy-gradient and value-function based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, however the value-function based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "3f225efbccb63d0c5170fce44fadb3c6",
"text": "Pelvic pain is a common gynaecological complaint, sometimes without any obvious etiology. We report a case of pelvic congestion syndrome, an often overlooked cause of pelvic pain, diagnosed by helical computed tomography. This seems to be an effective and noninvasive imaging modality. RID=\"\"ID=\"\"<e5>Correspondence to:</e5> J. H. Desimpelaere",
"title": ""
},
{
"docid": "3ca2d95885303f1ab395bd31d32df0c2",
"text": "Curiosity to predict personality, behavior and need for this is not as new as invent of social media. Personality prediction to better accuracy could be very useful for society. There are many papers and researches conducted on usefulness of the data for various purposes like in marketing, dating suggestions, organization development, personalized recommendations and health care to name a few. With the introduction and extreme popularity of Online Social Networking Sites like Facebook, Twitter and LinkedIn numerous researches were conducted based on public data available, online social networking applications and social behavior towards friends and followers to predict the personality. Structured mining of the social media content can provide us the ability to predict some personality traits. This survey aims at providing researchers with an overview of various strategies used for studies and research concentrating on predicting user personality and behavior using online social networking site content. There positives, limitations are well summarized as reported in the literature. Finally, a brief discussion including open issues for further research in the area of social networking site based personality prediction preceding conclusion.",
"title": ""
},
{
"docid": "5fa0ae0baaa954fb2ab356719f8ca629",
"text": "Estimating the pose of a camera (virtual or real) in which some augmentation takes place is one of the most important parts of an augmented reality (AR) system. Availability of powerful processors and fast frame grabbers have made vision-based trackers commonly used due to their accuracy as well as flexibility and ease of use. Current vision-based trackers are based on tracking of markers. The use of markers increases robustness and reduces computational requirements. However, their use can be very complicated, as they require certain maintenance. Direct use of scene features for tracking, therefore, is desirable. To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without any visual markers. Our method is based on a two-stage process. In the first stage, a set of features is learned with the help of an external tracking system while in action. The second stage uses these learned features for camera tracking when the system in the first stage decides that it is possible to do so. The system is very general so that it can employ any available feature tracking and pose estimation system for learning and tracking. We experimentally demonstrate the viability of the method in real-life examples.",
"title": ""
},
{
"docid": "5ca29a94ac01f9ad20249021802b1746",
"text": "Big Data has become a very popular term. It refers to the enormous amount of structured, semi-structured and unstructured data that are exponentially generated by high-performance applications in many domains: biochemistry, genetics, molecular biology, physics, astronomy, business, to mention a few. Since the literature of Big Data has increased significantly in recent years, it becomes necessary to develop an overview of the state-of-the-art in Big Data. This paper aims to provide a comprehensive review of Big Data literature of the last 4 years, to identify the main challenges, areas of application, tools and emergent trends of Big Data. To meet this objective, we have analyzed and classified 457 papers concerning Big Data. This review gives relevant information to practitioners and researchers about the main trends in research and application of Big Data in different technical domains, as well as a reference overview of Big Data tools.",
"title": ""
},
{
"docid": "f8ea80edbb4f31d5c0d1a2da5e8aae13",
"text": "BACKGROUND\nPremenstrual syndrome (PMS) is a common condition, and for 5% of women, the influence is so severe as to interfere with their mental health, interpersonal relationships, or studies. Severe PMS may result in decreased occupational productivity. The aim of this study was to investigate the influence of perception of PMS on evaluation of work performance.\n\n\nMETHODS\nA total of 1971 incoming female university students were recruited in September 2009. A simulated clinical scenario was used, with a test battery including measurement of psychological symptoms and the Chinese Premenstrual Symptom Questionnaire.\n\n\nRESULTS\nWhen evaluating employee performance in the simulated scenario, 1565 (79.4%) students neglected the impact of PMS, while 136 (6.9%) students considered it. Multivariate logistic regression showed that perception of daily function impairment due to PMS and frequency of measuring body weight were significantly associated with consideration of the influence of PMS on evaluation of work performance.\n\n\nCONCLUSION\nIt is important to increase the awareness of functional impairments related to severe PMS.",
"title": ""
},
{
"docid": "f7c2ebd19c41b697d52850a225bfe8a0",
"text": "There is currently a misconception among designers and users of free space laser communication (lasercom) equipment that 1550 nm light suffers from less atmospheric attenuation than 785 or 850 nm light in all weather conditions. This misconception is based upon a published equation for atmospheric attenuation as a function of wavelength, which is used frequently in the free-space lasercom literature. In hazy weather (visibility > 2 km), the prediction of less atmospheric attenuation at 1550 nm is most likely true. However, in foggy weather (visibility < 500 m), it appears that the attenuation of laser light is independent of wavelength, ie. 785 nm, 850 nm, and 1550 nm are all attenuated equally by fog. This same wavelength independence is also observed in snow and rain. This observation is based on an extensive literature search, and from full Mie scattering calculations. A modification to the published equation describing the atmospheric attenuation of laser power, which more accurately describes the effects of fog, is offered. This observation of wavelength-independent attenuation in fog is important, because fog, heavy snow, and extreme rain are the only types of weather that are likely to disrupt short (<500 m) lasercom links. Short lasercom links will be necessary to meet the high availability requirements of the telecommunications industry.",
"title": ""
},
{
"docid": "485270200008a292cefdb1e952441113",
"text": "This paper describes the prototype design, specimen design, experimental setup, and experimental results of three steel plate shear wall concepts. Prototype light-gauge steel plate shear walls are designed as seismic retrofits for a hospital st area of high seismicity, and emphasis is placed on minimizing their impact on the existing framing. Three single-story test spe designed using these prototypes as a basis, two specimens with flat infill plates (thicknesses of 0.9 mm ) and a third using a corrugat infill plate (thickness of 0.7 mm). Connection of the infill plates to the boundary frames is achieved through the use of b combination with industrial strength epoxy or welds, allowing for mobility of the infills if desired. Testing of the systems is don quasi-static conditions. It is shown that one of the flat infill plate specimens, as well as the specimen utilizing a corrugated in achieve significant ductility and energy dissipation while minimizing the demands placed on the surrounding framing. Exp results are compared to monotonic pushover predictions from computer analysis using a simple model and good agreement DOI: 10.1061/ (ASCE)0733-9445(2005)131:2(259) CE Database subject headings: Shear walls; Experimentation; Retrofitting; Seismic design; Cyclic design; Steel plates . d the field g of be a ; 993; rot are have ds ts istexfrom ctive is to y seis eintrofit reatn to the ular are r light-",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "176d0bf9525d6dd9bd4837b174e4f769",
"text": "Prader-Willi syndrome (PWS) is a genetic disorder frequently characterized by obesity, growth hormone deficiency, genital abnormalities, and hypogonadotropic hypogonadism. Incomplete or delayed pubertal development as well as premature adrenarche are usually found in PWS, whereas central precocious puberty (CPP) is very rare. This study aimed to report the clinical and biochemical follow-up of a PWS boy with CPP and to discuss the management of pubertal growth. By the age of 6, he had obesity, short stature, and many clinical criteria of PWS diagnosis, which was confirmed by DNA methylation test. Therapy with recombinant human growth hormone (rhGH) replacement (0.15 IU/kg/day) was started. Later, he presented psychomotor agitation, aggressive behavior, and increased testicular volume. Laboratory analyses were consistent with the diagnosis of CPP (gonadorelin-stimulated LH peak 15.8 IU/L, testosterone 54.7 ng/dL). The patient was then treated with gonadotropin-releasing hormone analog (GnRHa). Hypothalamic dysfunctions have been implicated in hormonal disturbances related to pubertal development, but no morphologic abnormalities were detected in the present case. Additional methylation analysis (MS-MLPA) of the chromosome 15q11 locus confirmed PWS diagnosis. We presented the fifth case of CPP in a genetically-confirmed PWS male. Combined therapy with GnRHa and rhGH may be beneficial in this rare condition of precocious pubertal development in PWS.",
"title": ""
},
{
"docid": "3a3f3e1c0eac36d53a40d7639c3d65cc",
"text": "The aim of this paper is to present a hybrid approach to accurate quantification of vascular structures from magnetic resonance angiography (MRA) images using level set methods and deformable geometric models constructed with 3-D Delaunay triangulation. Multiple scale filtering based on the analysis of local intensity structure using the Hessian matrix is used to effectively enhance vessel structures with various diameters. The level set method is then applied to automatically segment vessels enhanced by the filtering with a speed function derived from enhanced MRA images. Since the goal of this paper is to obtain highly accurate vessel borders, suitable for use in fluid flow simulations, in a subsequent step, the vessel surface determined by the level set method is triangulated using 3-D Delaunay triangulation and the resulting surface is used as a parametric deformable model. Energy minimization is then performed within a variational setting with a first-order internal energy; the external energy is derived from 3-D image gradients. Using the proposed method, vessels are accurately segmented from MRA data.",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
}
] | scidocsrr |
2e9015433f83b79fb13724ffacc0bdad | Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability | [
{
"docid": "ad7f49832562d27534f11b162e28f51b",
"text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.",
"title": ""
}
] | [
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "23cc8b190e9de5177cccf2f918c1ad45",
"text": "NFC is a standardised technology providing short-range RFID communication channels for mobile devices. Peer-to-peer applications for mobile devices are receiving increased interest and in some cases these services are relying on NFC communication. It has been suggested that NFC systems are particularly vulnerable to relay attacks, and that the attacker’s proxy devices could even be implemented using off-the-shelf NFC-enabled devices. This paper describes how a relay attack can be implemented against systems using legitimate peer-to-peer NFC communication by developing and installing suitable MIDlets on the attacker’s own NFC-enabled mobile phones. The attack does not need to access secure program memory nor use any code signing, and can use publicly available APIs. We go on to discuss how relay attack countermeasures using device location could be used in the mobile environment. These countermeasures could also be applied to prevent relay attacks on contactless applications using ‘passive’ NFC on mobile phones.",
"title": ""
},
{
"docid": "e94afab2ce61d7426510a5bcc88f7ca8",
"text": "Community detection is an important task in network analysis, in which we aim to learn a network partition that groups together vertices with similar community-level connectivity patterns. By finding such groups of vertices with similar structural roles, we extract a compact representation of the network’s large-scale structure, which can facilitate its scientific interpretation and the prediction of unknown or future interactions. Popular approaches, including the stochastic block model, assume edges are unweighted, which limits their utility by discarding potentially useful information. We introduce the weighted stochastic block model (WSBM), which generalizes the stochastic block model to networks with edge weights drawn from any exponential family distribution. This model learns from both the presence and weight of edges, allowing it to discover structure that would otherwise be hidden when weights are discarded or thresholded. We describe a Bayesian variational algorithm for efficiently approximating this model’s posterior distribution over latent block structures. We then evaluate the WSBM’s performance on both edge-existence and edge-weight prediction tasks for a set of real-world weighted networks. In all cases, the WSBM performs as well or better than the best alternatives on these tasks. community detection, weighted relational data, block models, exponential family, variational Bayes.",
"title": ""
},
{
"docid": "de99a984795645bc2e9fb4b3e3173807",
"text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.",
"title": ""
},
{
"docid": "2be58a0a458115fb9ef00627cc0580e0",
"text": "OBJECTIVE\nTo determine the physical and psychosocial impact of macromastia on adolescents considering reduction mammaplasty in comparison with healthy adolescents.\n\n\nMETHODS\nThe following surveys were administered to adolescents with macromastia and control subjects, aged 12 to 21 years: Short-Form 36v2, Rosenberg Self-Esteem Scale, Breast-Related Symptoms Questionnaire, and Eating-Attitudes Test-26 (EAT-26). Demographic variables and self-reported breast symptoms were compared between the 2 groups. Linear regression models, unadjusted and adjusted for BMI category (normal weight, overweight, obese), were fit to determine the effect of case status on survey score. Odds ratios for the risk of disordered eating behaviors (EAT-26 score ≥ 20) in cases versus controls were also determined.\n\n\nRESULTS\nNinety-six subjects with macromastia and 103 control subjects participated in this study. Age was similar between groups, but subjects with macromastia had a higher BMI (P = .02). Adolescents with macromastia had lower Short-Form 36v2 domain, Rosenberg Self-Esteem Scale, and Breast-Related Symptoms Questionnaire scores and higher EAT-26 scores compared with controls. Macromastia was also associated with a higher risk of disordered eating behaviors. In almost all cases, the impact of macromastia was independent of BMI category.\n\n\nCONCLUSIONS\nMacromastia has a substantial negative impact on health-related quality of life, self-esteem, physical symptoms, and eating behaviors in adolescents with this condition. These observations were largely independent of BMI category. Health care providers should be aware of these important negative health outcomes that are associated with macromastia and consider early evaluation for adolescents with this condition.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "447c5b2db5b1d7555cba2430c6d73a35",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "e1bee61b205d29db6b2ebbaf95e9c20b",
"text": "Despite the fact that there are thousands of programming languages existing there is a huge controversy about what language is better to solve a particular problem. In this paper we discuss requirements for programming language with respect to AGI research. In this article new language will be presented. Unconventional features (e.g. probabilistic programming and partial evaluation) are discussed as important parts of language design and implementation. Besides, we consider possible applications to particular problems related to AGI. Language interpreter for Lisp-like probabilistic mixed paradigm programming language is implemented in Haskell.",
"title": ""
},
{
"docid": "3a1019c31ff34f8a45c65703c1528fc4",
"text": "The increasing trend of studying the innate softness of robotic structures and amalgamating it with the benefits of the extensive developments in the field of embodied intelligence has led to sprouting of a relatively new yet extremely rewarding sphere of technology. The fusion of current deep reinforcement algorithms with physical advantages of a soft bio-inspired structure certainly directs us to a fruitful prospect of designing completely self-sufficient agents that are capable of learning from observations collected from their environment to achieve a task they have been assigned. For soft robotics structure possessing countless degrees of freedom, it is often not easy (something not even possible) to formulate mathematical constraints necessary for training a deep reinforcement learning (DRL) agent for the task in hand, hence, we resolve to imitation learning techniques due to ease of manually performing such tasks like manipulation that could be comfortably mimicked by our agent. Deploying current imitation learning algorithms on soft robotic systems have been observed to provide satisfactory results but there are still challenges in doing so. This review article thus posits an overview of various such algorithms along with instances of them being applied to real world scenarios and yielding state-of-the-art results followed by brief descriptions on various pristine branches of DRL research that may be centers of future research in this field of interest.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "b0766f310c4926b475bb646911a27f34",
"text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.",
"title": ""
},
{
"docid": "569700bd1114b1b93a13af25b2051631",
"text": "Empathy and sympathy play crucial roles in much of human social interaction and are necessary components for healthy coexistence. Sympathy is thought to be a proxy for motivating prosocial behavior and providing the affective and motivational base for moral development. The purpose of the present study was to use functional MRI to characterize developmental changes in brain activation in the neural circuits underpinning empathy and sympathy. Fifty-seven individuals, whose age ranged from 7 to 40 years old, were presented with short animated visual stimuli depicting painful and non-painful situations. These situations involved either a person whose pain was accidentally caused or a person whose pain was intentionally inflicted by another individual to elicit empathic (feeling as the other) or sympathetic (feeling concern for the other) emotions, respectively. Results demonstrate monotonic age-related changes in the amygdala, supplementary motor area, and posterior insula when participants were exposed to painful situations that were accidentally caused. When participants observed painful situations intentionally inflicted by another individual, age-related changes were detected in the dorsolateral prefrontal and ventromedial prefrontal cortex, with a gradual shift in that latter region from its medial to its lateral portion. This pattern of activation reflects a change from a visceral emotional response critical for the analysis of the affective significance of stimuli to a more evaluative function. Further, these data provide evidence for partially distinct neural mechanisms subserving empathy and sympathy, and demonstrate the usefulness of a developmental neurobiological approach to the new emerging area of moral neuroscience.",
"title": ""
},
{
"docid": "023302562ddfe48ac81943fedcf881b7",
"text": "Knitty is an interactive design system for creating knitted animals. The user designs a 3D surface model using a sketching interface. The system automatically generates a knitting pattern and then visualizes the shape of the resulting 3D animal model by applying a simple physics simulation. The user can see the resulting shape before beginning the actual knitting. The system also provides a production assistant interface for novices. The user can easily understand how to knit each stitch and what to do in each step. In a workshop for novices, we observed that even children can design their own knitted animals using our system.",
"title": ""
},
{
"docid": "691032ab4d9bcc1f536b1b8a5d8e73ae",
"text": "Many decisions must be made under stress, and many decision situations elicit stress responses themselves. Thus, stress and decision making are intricately connected, not only on the behavioral level, but also on the neural level, i.e., the brain regions that underlie intact decision making are regions that are sensitive to stress-induced changes. The purpose of this review is to summarize the findings from studies that investigated the impact of stress on decision making. The review includes those studies that examined decision making under stress in humans and were published between 1985 and October 2011. The reviewed studies were found using PubMed and PsycInfo searches. The review focuses on studies that have examined the influence of acutely induced laboratory stress on decision making and that measured both decision-making performance and stress responses. Additionally, some studies that investigated decision making under naturally occurring stress levels and decision-making abilities in patients who suffer from stress-related disorders are described. The results from the studies that were included in the review support the assumption that stress affects decision making. If stress confers an advantage or disadvantage in terms of outcome depends on the specific task or situation. The results also emphasize the role of mediating and moderating variables. The results are discussed with respect to underlying psychological and neural mechanisms, implications for everyday decision making and future research directions.",
"title": ""
},
{
"docid": "ea765da47c4280f846fe144570a755dc",
"text": "A new nonlinear noise reduction method is presented that uses the discrete wavelet transform. Similar to Donoho (1995) and Donohoe and Johnstone (1994, 1995), the authors employ thresholding in the wavelet transform domain but, following a suggestion by Coifman, they use an undecimated, shift-invariant, nonorthogonal wavelet transform instead of the usual orthogonal one. This new approach can be interpreted as a repeated application of the original Donoho and Johnstone method for different shifts. The main feature of the new algorithm is a significantly improved noise reduction compared to the original wavelet based approach. This holds for a large class of signals, both visually and in the l/sub 2/ sense, and is shown theoretically as well as by experimental results.",
"title": ""
},
{
"docid": "4427f79777bfe5ea1617f06a5aa6f0cc",
"text": "Despite decades of sustained effort, memory corruption attacks continue to be one of the most serious security threats faced today. They are highly sought after by attackers, as they provide ultimate control --- the ability to execute arbitrary low-level code. Attackers have shown time and again their ability to overcome widely deployed countermeasures such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) by crafting Return Oriented Programming (ROP) attacks. Although Turing-complete ROP attacks have been demonstrated in research papers, real-world ROP payloads have had a more limited objective: that of disabling DEP so that injected native code attacks can be carried out. In this paper, we provide a systematic defense, called Control Flow and Code Integrity (CFCI), that makes injected native code attacks impossible. CFCI achieves this without sacrificing compatibility with existing software, the need to replace system programs such as the dynamic loader, and without significant performance penalty. We will release CFCI as open-source software by the time of this conference.",
"title": ""
},
{
"docid": "3969a0156c558020ca1de3b978c3ab4e",
"text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.",
"title": ""
},
{
"docid": "65aa27cc08fd1f3532f376b536c452ba",
"text": "Design work and design knowledge in Information Systems (IS) is important for both research and practice. Yet there has been comparatively little critical attention paid to the problem of specifying design theory so that it can be communicated, justified, and developed cumulatively. In this essay we focus on the structural components or anatomy of design theories in IS as a special class of theory. In doing so, we aim to extend the work of Walls, Widemeyer and El Sawy (1992) on the specification of information systems design theories (ISDT), drawing on other streams of thought on design research and theory to provide a basis for a more systematic and useable formulation of these theories. We identify eight separate components of design theories: (1) purpose and scope, (2) constructs, (3) principles of form and function, (4) artifact mutability, (5) testable propositions, (6) justificatory knowledge (kernel theories), (7) principles of implementation, and (8) an expository instantiation. This specification includes components missing in the Walls et al. adaptation of Dubin (1978) and Simon (1969) and also addresses explicitly problems associated with the role of instantiations and the specification of design theories for methodologies and interventions as well as for products and applications. The essay is significant as the unambiguous establishment of design knowledge as theory gives a sounder base for arguments for the rigor and legitimacy of IS as an applied discipline and for its continuing progress. A craft can proceed with the copying of one example of a design artifact by one artisan after another. A discipline cannot.",
"title": ""
}
] | scidocsrr |
6f43e36f14c9bb4d8535b80604301c64 | Development and validation of the Self-Regulation of Eating Behaviour Questionnaire for adults | [
{
"docid": "b1dd830adf87c283ff58630eade75b3c",
"text": "Self-control is a central function of the self and an important key to success in life. The exertion of self-control appears to depend on a limited resource. Just as a muscle gets tired from exertion, acts of self-control cause short-term impairments (ego depletion) in subsequent self-control, even on unrelated tasks. Research has supported the strength model in the domains of eating, drinking, spending, sexuality, intelligent thought, making choices, and interpersonal behavior. Motivational or framing factors can temporarily block the deleterious effects of being in a state of ego depletion. Blood glucose is an important component of the energy. KEYWORDS—self-control; ego depletion; willpower; impulse; strength Every day, people resist impulses to go back to sleep, to eat fattening or forbidden foods, to say or do hurtful things to their relationship partners, to play instead of work, to engage in inappropriate sexual or violent acts, and to do countless other sorts of problematic behaviors—that is, ones that might feel good immediately or be easy but that carry long-term costs or violate the rules and guidelines of proper behavior. What enables the human animal to follow rules and norms prescribed by society and to resist doing what it selfishly wants? Self-control refers to the capacity for altering one’s own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Many writers use the terms selfcontrol and self-regulation interchangeably, but those whomake a distinction typically consider self-control to be the deliberate, conscious, effortful subset of self-regulation. In contrast, homeostatic processes such as maintaining a constant body temperature may be called self-regulation but not self-control. Self-control enables a person to restrain or override one response, thereby making a different response possible. Self-control has attracted increasing attention from psychologists for two main reasons. At the theoretical level, self-control holds important keys to understanding the nature and functions of the self. Meanwhile, the practical applications of self-control have attracted study in many contexts. Inadequate self-control has been linked to behavioral and impulse-control problems, including overeating, alcohol and drug abuse, crime and violence, overspending, sexually impulsive behavior, unwanted pregnancy, and smoking (e.g., Baumeister, Heatherton, & Tice, 1994; Gottfredson & Hirschi, 1990; Tangney, Baumeister, & Boone, 2004; Vohs & Faber, 2007). It may also be linked to emotional problems, school underachievement, lack of persistence, various failures at task performance, relationship problems and dissolution, and more.",
"title": ""
}
] | [
{
"docid": "675a6a2c303fa36975296898568b8ae3",
"text": "Recent work suggests that wings can be used to prolong the jumps of miniature jumping robots. However, no functional miniature jumping robot has been presented so far that can successfully apply this hybrid locomotion principle. In this publication, we present the development and characterization of the ‘EPFL jumpglider’, a miniature robot that can prolong its jumps using steered hybrid jumping and gliding locomotion over varied terrain. For example, it can safely descend from elevated positions such as stairs and buildings and propagate on ground with small jumps. The publication presents a systematic evaluation of three biologically inspired wing folding mechanisms and a rigid wing design. Based on this evaluation, two wing designs are implemented and compared1.",
"title": ""
},
{
"docid": "7223f14d3ea2d10661185c8494b81438",
"text": "In 1990 the molecular basis for a hereditary disorder in humans, hyperkalemic periodic paralysis, was first genetically demonstrated to be impaired ion channel function. Since then over a dozen diseases, now termed as channelopathies, have been described. Most of the disorders affect excitable tissue such as muscle and nerve; however, kidney diseases have also been described. Basic research on structure-function relationships and physiology of excitation has benefited tremendously from the discovery of disease-causing mutations pointing to regions of special significance within the channel proteins. This course focuses mainly on the clinical and genetic features of neurological disturbances in humans caused by genetic defects in voltage-gated sodium, calcium, potassium, and chloride channels. Disorders of skeletal muscle are by far the most studied and therefore more detailed in this text than the neuronal channelopathies which have been discovered only very recently. Review literature may be found in the attached reference list [1–12]. Skeletal muscle sodium channelopathies",
"title": ""
},
{
"docid": "1ce647f5e36c07745c512ed856a9d517",
"text": "This paper describes a discussion-bot that provides answers to students' discussion board questions in an unobtrusive and human-like way. Using information retrieval and natural language processing techniques, the discussion-bot identifies the questioner's interest, mines suitable answers from an annotated corpus of 1236 archived threaded discussions and 279 course documents and chooses an appropriate response. A novel modeling approach was designed for the analysis of archived threaded discussions to facilitate answer extraction. We compare a self-out and an all-in evaluation of the mined answers. The results show that the discussion-bot can begin to meet students' learning requests. We discuss directions that might be taken to increase the effectiveness of the question matching and answer extraction algorithms. The research takes place in the context of an undergraduate computer science course.",
"title": ""
},
{
"docid": "e749b355c41ca254a0ee249d7c4e9ab1",
"text": "This paper explores a framework to permit the creation of modules as part of a robot creation and combat game. We explore preliminary work that offers a design solution to generate and test robots comprised of modular components. This current implementation, which is reliant on a constraint-driven process is then assessed to indicate the expressive range of content it can create and the total number of unique combinations it can establish.",
"title": ""
},
{
"docid": "78b371e7df39a1ebbad64fdee7303573",
"text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.",
"title": ""
},
{
"docid": "56d9c09cc01854e0be889e63f512165a",
"text": "CONTEXT\nRapid opioid detoxification with opioid antagonist induction using general anesthesia has emerged as an expensive, potentially dangerous, unproven approach to treat opioid dependence.\n\n\nOBJECTIVE\nTo determine how anesthesia-assisted detoxification with rapid antagonist induction for heroin dependence compared with 2 alternative detoxification and antagonist induction methods.\n\n\nDESIGN, SETTING, AND PATIENTS\nA total of 106 treatment-seeking heroin-dependent patients, aged 21 through 50 years, were randomly assigned to 1 of 3 inpatient withdrawal treatments over 72 hours followed by 12 weeks of outpatient naltrexone maintenance with relapse prevention psychotherapy. This randomized trial was conducted between 2000 and 2003 at Columbia University Medical Center's Clinical Research Center. Outpatient treatment occurred at the Columbia University research service for substance use disorders. Patients were included if they had an American Society of Anesthesiologists physical status of I or II, were without major comorbid psychiatric illness, and were not dependent on other drugs or alcohol.\n\n\nINTERVENTIONS\nAnesthesia-assisted rapid opioid detoxification with naltrexone induction, buprenorphine-assisted rapid opioid detoxification with naltrexone induction, and clonidine-assisted opioid detoxification with delayed naltrexone induction.\n\n\nMAIN OUTCOME MEASURES\nWithdrawal severity scores on objective and subjective scales; proportions of patients receiving naltrexone, completing inpatient detoxification, and retained in treatment; proportion of opioid-positive urine specimens.\n\n\nRESULTS\nMean withdrawal severities were comparable across the 3 treatments. Compared with clonidine-assisted detoxification, the anesthesia- and buprenorphine-assisted detoxification interventions had significantly greater rates of naltrexone induction (94% anesthesia, 97% buprenorphine, and 21% clonidine), but the groups did not differ in rates of completion of inpatient detoxification. Treatment retention over 12 weeks was not significantly different among groups with 7 of 35 (20%) retained in the anesthesia-assisted group, 9 of 37 (24%) in the buprenorphine-assisted group, and 3 of 34 (9%) in the clonidine-assisted group. Induction with 50 mg of naltrexone significantly reduced the risk of dropping out (odds ratio, 0.28; 95% confidence interval, 0.15-0.51). There were no significant group differences in proportions of opioid-positive urine specimens. The anesthesia procedure was associated with 3 potentially life-threatening adverse events.\n\n\nCONCLUSION\nThese data do not support the use of general anesthesia for heroin detoxification and rapid opioid antagonist induction.",
"title": ""
},
{
"docid": "3a31192482674f400e6230f35c7bfe38",
"text": "This paper introduces Parsing to Programs, a framework that combines ideas from parsing and probabilistic programming for situated question answering. As a case study, we build a system that solves pre-university level Newtonian physics questions. Our approach represents domain knowledge of Newtonian physics as programs. When presented with a novel question, the system learns a formal representation of the question by combining interpretations from the question text and any associated diagram. Finally, the system uses this formal representation to solve the questions using the domain knowledge. We collect a new dataset of Newtonian physics questions from a number of textbooks and use it to train our system. The system achieves near human performance on held-out textbook questions and section 1 of AP Physics C mechanics - both on practice questions as well as on freely available actual exams held in 1998 and 2012.",
"title": ""
},
{
"docid": "89835907e8212f7980c35ae12d711339",
"text": "In this letter, a novel ultra-wideband (UWB) bandpass filter with compact size and improved upper-stopband performance has been studied and implemented using multiple-mode resonator (MMR). The MMR is formed by attaching three pairs of circular impedance-stepped stubs in shunt to a high impedance microstrip line. By simply adjusting the radius of the circles of the stubs, the resonant modes of the MMR can be roughly allocated within the 3.1-10.6 GHz UWB band while suppressing the spurious harmonics in the upper-stopband. In order to enhance the coupling degree, two interdigital coupled-lines are used in the input and output sides. Thus, a predicted UWB passband is realized. Meanwhile, the insertion loss is higher than 30.0 dB in the upper-stopband from 12.1 to 27.8 GHz. Finally, the filter is successfully designed and fabricated. The EM-simulated and the measured results are presented in this work where excellent agreement between them is obtained.",
"title": ""
},
{
"docid": "d2b7ff4fc41610013b98a70fc32c8176",
"text": "Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.",
"title": ""
},
{
"docid": "caead07ebeea66cb5d8e57c956a11289",
"text": "End-to-end bandwidth estimation tools like Iperf though fairly accurate are intrusive. In this paper, we describe how with an instrumented TCP stack (Web100), we can estimate the end-to-end bandwidth accurately, while consuming significantly less network bandwidth and time. We modified Iperf to use Web100 to detect the end of slow-start and estimate the end-toend bandwidth by measuring the amount of data sent for a short period (1 second) after the slow-start, when the TCP throughput is relatively stable. We obtained bandwidth estimates differing by less than 10% when compared to running Iperf for 20 seconds, and savings in bandwidth estimation time of up to 94% and savings in network traffic of up to 92%.",
"title": ""
},
{
"docid": "9846794c512f847ca16c43bcf055a757",
"text": "Sensing and presenting on-road information of moving vehicles is essential for fully and semi-automated driving. It is challenging to track vehicles from affordable on-board cameras in crowded scenes. The mismatch or missing data are unavoidable and it is ineffective to directly present uncertain cues to support the decision-making. In this paper, we propose a physical model based on incompressible fluid dynamics to represent the vehicle’s motion, which provides hints of possible collision as a continuous scalar riskmap. We estimate the position and velocity of other vehicles from a monocular on-board camera located in front of the ego-vehicle. The noisy trajectories are then modeled as the boundary conditions in the simulation of advection and diffusion process. We then interactively display the animating distribution of substances, and show that the continuous scalar riskmap well matches the perception of vehicles even in presence of the tracking failures. We test our method on real-world scenes and discuss about its application for driving assistance and autonomous vehicle in the future.",
"title": ""
},
{
"docid": "1bc33dcf86871e70bd3b7856fd3c3857",
"text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.",
"title": ""
},
{
"docid": "6d7188bd9d7a9a6c80c573d6184d467d",
"text": "Background: Feedback of the weak areas of knowledge in RPD using continuous competency or other test forms is very essential to develop the student knowledge and the syllabus as well. This act should be a regular practice. Aim: To use the outcome of competency test and the objectives structured clinical examination of removable partial denture as a reliable measure to provide a continuous feedback to the teaching system. Method: This sectional study was performed on sixty eight, fifth year students for the period from 2009 to 2010. The experiment was divided into two parts: continuous assessment and the final examination. In the first essay; some basic removable partial denture knowledge, surveying technique, and designing of the metal framework were used to estimate the learning outcome. While in the second essay, some components of the objectives structured clinical examination were compared to the competency test to see the difference in learning outcome. Results: The students’ performance was improved in the final assessment just in some aspects of removable partial denture. However, for the surveying, the students faced some problems. Conclusion: the continuous and final tests can provide a simple tool to advice the teachers for more effective teaching of the RPD. So that the weakness in specific aspects of the RPD syllabus can be detected and corrected continuously from the beginning, during and at the end of the course.",
"title": ""
},
{
"docid": "a2adeb9448c699bbcbb10d02a87e87a5",
"text": "OBJECTIVE\nTo quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity.\n\n\nMETHODS\nThis study used a content analysis of 127 apps from Apple's (App Store) Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores.\n\n\nRESULTS\nApps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores.\n\n\nCONCLUSION\nIt is not unexpected that apps contained only minimal theoretical content, given that app developers come from a variety of backgrounds and many are not trained in the application of health behavior theory. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive. There is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps. These future collaborations between health behavior change experts and app developers could foster apps superior in both theory and programming possibly resulting in better health outcomes.",
"title": ""
},
{
"docid": "0481ce9aea9060b317d3977417257799",
"text": "Severe issues about data acquisition and management arise during the design creation and development due to complexity, uncertainty and ambiguity. BIM (Building Information Modelling) is a tool for a team based lean design approach towards improved architectural practice across the supply chain. However, moving from a CAD (Computer Aided Design) approach to BIM (Building Information Modelling) represents a fundamental change for individual disciplines and the construction industry as a whole. Although BIM has been implemented by large practices, it is not widely used by SMEs (Small and Medium Sized Enterprises). Purpose: This paper aims to present a systematic approach for BIM implementation for Architectural SMEs at the organizational level Design/Methodology/Approach: The research is undertaken through a KTP (Knowledge transfer Partnership) project between the University of Salford and John McCall Architects (JMA) a SME based in Liverpool. The overall aim of the KTP is to develop lean design practice through BIM adoption. The BIM implementation approach uses a socio-technical view which does not only consider the implementation of technology but also considers the socio-cultural environment that provides the context for its implementation. The action research oriented qualitative and quantitative research is used for discovery, comparison, and experimentation as it provides “learning by doing”. Findings: The strategic approach to BIM adoption incorporated people, process and technology equally and led to capacity building through the improvements in process, technological infrastructure and upskilling of JMA staff to attain efficiency gains and competitive advantages. Originality/Value : This paper introduces a systematic approach for BIM adoption based on the action research philosophy and demonstrates a roadmap for BIM adoption at the operational level for SME companies.",
"title": ""
},
{
"docid": "53f9f38400266da916dd10200b6b4df1",
"text": "Time series prediction has been studied in a variety of domains. However, it is still challenging to predict future series given historical observations and past exogenous data. Existing methods either fail to consider the interactions among different components of exogenous variables which may affect the prediction accuracy, or cannot model the correlations between exogenous data and target data. Besides, the inherent temporal dynamics of exogenous data are also related to the target series prediction, and thus should be considered as well. To address these issues, we propose an end-to-end deep learning model, i.e., Hierarchical attention-based Recurrent Highway Network (HRHN), which incorporates spatio-temporal feature extraction of exogenous variables and temporal dynamics modeling of target variables into a single framework. Moreover, by introducing the hierarchical attention mechanism, HRHN can adaptively select the relevant exogenous features in different semantic levels. We carry out comprehensive empirical evaluations with various methods over several datasets, and show that HRHN outperforms the state of the arts in time series prediction, especially in capturing sudden changes and sudden oscillations of time series.",
"title": ""
},
{
"docid": "bff21b4a0bc4e7cc6918bc7f107a5ca5",
"text": "This paper discusses driving system design based on traffic rules. This allows fully automated driving in an environment with human drivers, without necessarily changing equipment on other vehicles or infrastructure. It also facilitates cooperation between the driving system and the host driver during highly automated driving. The concept, referred to as legal safety, is illustrated for highly automated driving on highways with distance keeping, intelligent speed adaptation, and lane-changing functionalities. Requirements by legal safety on perception and control components are discussed. This paper presents the actual design of a legal safety decision component, which predicts object trajectories and calculates optimal subject trajectories. System implementation on automotive electronic control units and results on vehicle and simulator are discussed.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "df0d231f8b60d22f6b5b0d1cfcef968c",
"text": "In recent years, Big Data has created significant opportunities for academic research in a wide range of topics within the social sciences. We contribute to this growing field by exploiting the unique social media data from Glassdoor.com. We extract anonymous employee reviews for textual analysis to reveal the relation between employee satisfaction and company performance. Using categories from corporate value studies, our analysis not only provide a “bird’s eye view,” but also provide specific aspects of employee satisfaction are responsible for driving these correlations. We found that while Innovation is the most important category for technology industry, Quality category drives retailing and financial industry. We confirmed the significant correlation between overall employee satisfaction and corporate performance and discovered categories that are negatively correlated with performance: Safety, Communication and Integrity. We hope that this research encourages other researchers to consider the rich environthat a text analytics methodology makes possible.",
"title": ""
},
{
"docid": "5d88f5a18d3e4961eee6e9ed6db62817",
"text": "“Style transfer” among images has recently emerged as a very active research topic, fuelled by the power of convolution neural networks (CNNs), and has become fast a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer method, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signal confirm the potential of the proposed approach.",
"title": ""
}
] | scidocsrr |
51daa90398d59d92015166b7fbbfd226 | Data-driven advice for applying machine learning to bioinformatics problems | [
{
"docid": "40f21a8702b9a0319410b716bda0a11e",
"text": "A number of supervised learning methods have been introduced in the last decade. Unfortunately, the last comprehensive empirical evaluation of supervised learning was the Statlog Project in the early 90's. We present a large-scale empirical comparison between ten supervised learning methods: SVMs, neural nets, logistic regression, naive bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps. We also examine the effect that calibrating the models via Platt Scaling and Isotonic Regression has on their performance. An important aspect of our study is the use of a variety of performance criteria to evaluate the learning methods.",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] | [
{
"docid": "27bed0efd42918f783e16ca0cf0b8c4a",
"text": "This report documents the program and the outcomes of Dagstuhl Seminar 17301 “User-Generated Content in Social Media”. Social media have a profound impact on individuals, businesses, and society. As users post vast amounts of text and multimedia content every minute, the analysis of this user generated content (UGC) can offer insights to individual and societal concerns and could be beneficial to a wide range of applications. In this seminar, we brought together researchers from different subfields of computer science, such as information retrieval, multimedia, natural language processing, machine learning and social media analytics. We discussed the specific properties of UGC, the general research tasks currently operating on this type of content, identifying their limitations, and imagining new types of applications. We formed two working groups, WG1 “Fake News and Credibility”, WG2 “Summarizing and Story Telling from UGC”. WG1 invented an “Information Nutrition Label” that characterizes a document by different features such as e.g. emotion, opinion, controversy, and topicality; For computing these feature values, available methods and open research issues were identified. WG2 developed a framework for summarizing heterogeneous, multilingual and multimodal data, discussed key challenges and applications of this framework. Seminar July 23–28, 2017 – http://www.dagstuhl.de/17301 1998 ACM Subject Classification H Information Systems, H.5 Information Interfaces and Presentation, H.5.1 Multimedia Information Systems, H.3 Information Storage and Retrieval, H.1 Models and principles, I Computing methodologies, I.2 Artificial Intelligence, I.2.6 Learning, I.2.7 Natural language processing, J Computer Applications, J.4 Social and behavioural sciences, K Computing Milieux, K.4 Computers and Society, K.4.1 Public policy issues",
"title": ""
},
{
"docid": "69bb10420be07fe9fb0fd372c606d04e",
"text": "Contextual text mining is concerned with extracting topical themes from a text collection with context information (e.g., time and location) and comparing/analyzing the variations of themes over different contexts. Since the topics covered in a document are usually related to the context of the document, analyzing topical themes within context can potentially reveal many interesting theme patterns. In this paper, we generalize some of these models proposed in the previous work and we propose a new general probabilistic model for contextual text mining that can cover several existing models as special cases. Specifically, we extend the probabilistic latent semantic analysis (PLSA) model by introducing context variables to model the context of a document. The proposed mixture model, called contextual probabilistic latent semantic analysis (CPLSA) model, can be applied to many interesting mining tasks, such as temporal text mining, spatiotemporal text mining, author-topic analysis, and cross-collection comparative analysis. Empirical experiments show that the proposed mixture model can discover themes and their contextual variations effectively.",
"title": ""
},
{
"docid": "52e1acca8a09cec2a97822dc24d0ed7b",
"text": "In this paper virtual teams are defined as living systems and as such made up of people with different needs and characteristics. Groups generally perform better when they are able to establish a high level of group cohesion. According to Druskat and Wolff [2001] this status can be reached by establishing group emotional intelligence. Group emotional intelligence is reached via interactions among members and the interactions are allowed through the disposable linking factors. Virtual linking factors differ from traditional linking factors; therefore, the concept of Virtual Emotional Intelligence is here introduced in order to distinguish the group cohesion reaching process in virtual teams.",
"title": ""
},
{
"docid": "9de00d8cf6b3001f976fa49c42875620",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "1c1cc9d6b538fda6d2a38ff1dcce7085",
"text": "Major speech production models from speech science literature and a number of popular statistical “generative” models of speech used in speech technology are surveyed. Strengths and weaknesses of these two styles of speech models are analyzed, pointing to the need to integrate the respective strengths while eliminating the respective weaknesses. As an example, a statistical task-dynamic model of speech production is described, motivated by the original deterministic version of the model and targeted for integrated-multilingual speech recognition applications. Methods for model parameter learning (training) and for likelihood computation (recognition) are described based on statistical optimization principles integrated in neural network and dynamic system theories.",
"title": ""
},
{
"docid": "f4bdd6416013dfd2b552efef9c1b22e9",
"text": "ABSTRACT\nTraumatic hemipelvectomy is an uncommon and life threatening injury. We report a case of a 16-year-old boy involved in a traffic accident who presented with an almost circumferential pelvic wound with wide diastasis of the right sacroiliac joint and symphysis pubis. The injury was associated with complete avulsion of external and internal iliac vessels as well as the femoral and sciatic nerves. He also had ipsilateral open comminuted fractures of the femur and tibia. Emergency debridement and completion of amputation with preservation of the posterior gluteal flap and primary anastomosis of the inferior gluteal vessels to the internal iliac artery stump were performed. A free fillet flap was used to close the massive exposed area.\n\n\nKEY WORDS\ntraumatic hemipelvectomy, amputation, and free gluteus maximus fillet flap.",
"title": ""
},
{
"docid": "4e46fb5c1abb3379519b04a84183b055",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "5cd48ee461748d989c40f8e0f0aa9581",
"text": "Being able to identify which rhetorical relations (e.g., contrast or explanation) hold between spans of text is important for many natural language processing applications. Using machine learning to obtain a classifier which can distinguish between different relations typically depends on the availability of manually labelled training data, which is very time-consuming to create. However, rhetorical relations are sometimes lexically marked, i.e., signalled by discourse markers (e.g., because, but, consequently etc.), and it has been suggested (Marcu and Echihabi, 2002) that the presence of these cues in some examples can be exploited to label them automatically with the corresponding relation. The discourse markers are then removed and the automatically labelled data are used to train a classifier to determine relations even when no discourse marker is present (based on other linguistic cues such as word co-occurrences). In this paper, we investigate empirically how feasible this approach is. In particular, we test whether automatically labelled, lexically marked examples are really suitable training material for classifiers that are then applied to unmarked examples. Our results suggest that training on this type of data may not be such a good strategy, as models trained in this way do not seem to generalise very well to unmarked data. Furthermore, we found some evidence that this behaviour is largely independent of the classifiers used and seems to lie in the data itself (e.g., marked and unmarked examples may be too dissimilar linguistically and removing unambiguous markers in the automatic labelling process may lead to a meaning shift in the examples).",
"title": ""
},
{
"docid": "601748e27c7b3eefa4ff29252b42bf93",
"text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-bu er and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \\rubber sheet\" e ect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel. We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, a ne.",
"title": ""
},
{
"docid": "c227f76c42ae34af11193e3ecb224ecb",
"text": "Antibiotics and antibiotic resistance determinants, natural molecules closely related to bacterial physiology and consistent with an ancient origin, are not only present in antibiotic-producing bacteria. Throughput sequencing technologies have revealed an unexpected reservoir of antibiotic resistance in the environment. These data suggest that co-evolution between antibiotic and antibiotic resistance genes has occurred since the beginning of time. This evolutionary race has probably been slow because of highly regulated processes and low antibiotic concentrations. Therefore to understand this global problem, a new variable must be introduced, that the antibiotic resistance is a natural event, inherent to life. However, the industrial production of natural and synthetic antibiotics has dramatically accelerated this race, selecting some of the many resistance genes present in nature and contributing to their diversification. One of the best models available to understand the biological impact of selection and diversification are β-lactamases. They constitute the most widespread mechanism of resistance, at least among pathogenic bacteria, with more than 1000 enzymes identified in the literature. In the last years, there has been growing concern about the description, spread, and diversification of β-lactamases with carbapenemase activity and AmpC-type in plasmids. Phylogenies of these enzymes help the understanding of the evolutionary forces driving their selection. Moreover, understanding the adaptive potential of β-lactamases contribute to exploration the evolutionary antagonists trajectories through the design of more efficient synthetic molecules. In this review, we attempt to analyze the antibiotic resistance problem from intrinsic and environmental resistomes to the adaptive potential of resistance genes and the driving forces involved in their diversification, in order to provide a global perspective of the resistance problem.",
"title": ""
},
{
"docid": "4927fee47112be3d859733c498fbf594",
"text": "To design effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache. We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.",
"title": ""
},
{
"docid": "089ef4e4469554a4d4ef75089fe9c7be",
"text": "The attention of software vendors has moved recently to SMEs (smallto medium-sized enterprises), offering them a vast range of enterprise systems (ES), which were formerly adopted by large firms only. From reviewing information technology innovation adoption literature, it can be argued that IT innovations are highly differentiated technologies for which there is not necessarily a single adoption model. Additionally, the question of why one SME adopts an ES while another does not is still understudied. This study intends to fill this gap by investigating the factors impacting SME adoption of ES. A qualitative approach was adopted in this study involving key decision makers in nine SMEs in the Northwest of England. The contribution of this study is twofold: it provides a framework that can be used as a theoretical basis for studying SME adoption of ES, and it empirically examines the impact of the factors within this framework on SME adoption of ES. The findings of this study confirm that factors impacting the adoption of ES are different from factors impacting SME adoption of other previously studied IT innovations. Contrary to large companies that are mainly affected by organizational factors, this study shows that SMEs are not only affected by environmental factors as previously established, but also affected by technological and organizational factors.",
"title": ""
},
{
"docid": "0bd3beaad8cd6d6f19603eca9320718d",
"text": "For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Vercellis, Carlo. Business intelligence : data mining and optimization for decision making / Carlo Vercellis. p. cm. Includes bibliographical references and index.",
"title": ""
},
{
"docid": "af2ccb9d51cd28426fd4f03e7454d7bf",
"text": "How we categorize certain objects depends on the processes they afford: something is a vehicle because it affords transportation, a house because it offers shelter or a watercourse because water can flow in it. The hypothesis explored here is that image schemas (such as LINK, CONTAINER, SUPPORT, and PATH) capture abstractions that are essential to model affordances and, by implication, categories. To test the idea, I develop an algebraic theory formalizing image schemas and accounting for the role of affordances in categorizing spatial entities.",
"title": ""
},
{
"docid": "ae9219c7e3d85b7b8f83569d000a02bb",
"text": "This paper proposes a bidirectional switched-capacitor dc-dc converter for applications that require high voltage gain. Some of conventional switched-capacitor dc-dc converters have diverse voltage or current stresses for the switching devices in the circuit, not suitable for modular configuration or for high efficiency demand; some suffer from relatively high power loss or large device count for high voltage gain, even if the device voltage stress could be low. By contrast, the proposed dc-dc converter features low component (switching device and capacitor) power rating, small switching device count, and low output capacitance requirement. In addition to its low current stress, the combination of two short symmetric paths of charge pumps further lowers power loss. Therefore, a small and light converter with high voltage gain and high efficiency can be achieved. Simulation and experimental results of a 450-W prototype with a voltage conversion ratio of six validate the principle and features of this topology.",
"title": ""
},
{
"docid": "eb8d681fcfd5b18c15dd09738ab4717c",
"text": "Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there leaves enough time for commute between arrival and hotel check-in. This paper addresses this challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure all cross-subtask constraints be satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over two baselines, one based on handcrafted rules and the other based on flat deep reinforcement learning.",
"title": ""
},
{
"docid": "71bc346237c5f97ac245dd7b7bbb497f",
"text": "Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.",
"title": ""
},
{
"docid": "4f40700ccdc1b6a8a306389f1d7ea107",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
{
"docid": "419f031c3220676ba64c3ec983d4e160",
"text": "Volumetric muscle loss (VML) injuries exceed the considerable intrinsic regenerative capacity of skeletal muscle, resulting in permanent functional and cosmetic deficits. VML and VML-like injuries occur in military and civilian populations, due to trauma and surgery as well as due to a host of congenital and acquired diseases/syndromes. Current therapeutic options are limited, and new approaches are needed for a more complete functional regeneration of muscle. A potential solution is human hair-derived keratin (KN) biomaterials that may have significant potential for regenerative therapy. The goal of these studies was to evaluate the utility of keratin hydrogel formulations as a cell and/or growth factor delivery vehicle for functional muscle regeneration in a surgically created VML injury in the rat tibialis anterior (TA) muscle. VML injuries were treated with KN hydrogels in the absence and presence of skeletal muscle progenitor cells (MPCs), and/or insulin-like growth factor 1 (IGF-1), and/or basic fibroblast growth factor (bFGF). Controls included VML injuries with no repair (NR), and implantation of bladder acellular matrix (BAM, without cells). Initial studies conducted 8 weeks post-VML injury indicated that application of keratin hydrogels with growth factors (KN, KN+IGF-1, KN+bFGF, and KN+IGF-1+bFGF, n = 8 each) enabled a significantly greater functional recovery than NR (n = 7), BAM (n = 8), or the addition of MPCs to the keratin hydrogel (KN+MPC, KN+MPC+IGF-1, KN+MPC+bFGF, and KN+MPC+IGF-1+bFGF, n = 8 each) (p < 0.05). A second series of studies examined functional recovery for as many as 12 weeks post-VML injury after application of keratin hydrogels in the absence of cells. A significant time-dependent increase in functional recovery of the KN, KN+bFGF, and KN+IGF+bFGF groups was observed, relative to NR and BAM implantation, achieving as much as 90% of the maximum possible functional recovery. Histological findings from harvested tissue at 12 weeks post-VML injury documented significant increases in neo-muscle tissue formation in all keratin treatment groups as well as diminished fibrosis, in comparison to both BAM and NR. In conclusion, keratin hydrogel implantation promoted statistically significant and physiologically relevant improvements in functional outcomes post-VML injury to the rodent TA muscle.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
}
] | scidocsrr |